From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jun Han
To: "Shaw, Jeffrey B"
Cc: "dev@dpdk.org"
Date: Wed, 28 May 2014 14:58:48 +0200
Subject: Re: [dpdk-dev] DPDK Latency Issue

Hi all,

I realized I made a mistake in my previous post. Please note the changes
below.

"While I vary MAX_BURST_SIZE (1, 8, 16, 32, 64, and 128) and fix
BURST_TX_DRAIN_US at 100 usec, I see a low average latency when sending a
burst of packets greater than MAX_BURST_SIZE. For example, when
MAX_BURST_SIZE is 32, if I send a burst of 32 packets or larger, then I
get around 10 usec of latency. When the burst size is less than 32, I see
a higher average latency, which makes total sense."

On Mon, May 26, 2014 at 9:39 PM, Jun Han wrote:
> Thanks a lot, Jeff, for your detailed explanation. I still have an open
> question left, and I would be grateful if someone would share their
> insight on it.
>
> I have performed experiments varying both MAX_BURST_SIZE (originally set
> to 32) and BURST_TX_DRAIN_US (originally set to 100 usec) in l3fwd
> main.c.
>
> While I vary MAX_BURST_SIZE (1, 8, 16, 32, 64, and 128) and fix
> BURST_TX_DRAIN_US at 100 usec, I see a low average latency when sending
> a burst of packets less than or equal to MAX_BURST_SIZE. For example,
> when MAX_BURST_SIZE is 32, if I send a burst of 32 packets or fewer,
> then I get around 10 usec of latency. When it goes above that, the
> average latency starts to rise, which makes total sense.
>
> My main question is the following. When I send a continuous stream of
> 64B packets at a rate of 14.88 Mpps, I consistently measure an average
> latency of around 150 usec, no matter what MAX_BURST_SIZE is. My guess
> is that the latency should be bounded by BURST_TX_DRAIN_US, which is
> fixed at 100 usec. Would you share your thoughts on this issue, please?
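
For context (adding this now, since it may help frame the question above):
this is roughly the TX buffering path I am varying. It is a condensed
sketch of the pattern I see in l3fwd's main.c, not the literal code; the
drain_if_due() helper is my own simplification (l3fwd does this check
inline in its main loop), and names may differ between DPDK versions.

#include <rte_branch_prediction.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define MAX_PKT_BURST      32   /* what I call MAX_BURST_SIZE above */
#define BURST_TX_DRAIN_US 100   /* TX drain timeout, in microseconds */

struct mbuf_table {
        uint16_t len;
        struct rte_mbuf *m_table[MAX_PKT_BURST];
};

/* l3fwd keeps one of these per lcore and per port; one is enough here. */
static struct mbuf_table tx_mbufs;

/* Flush everything currently buffered for 'port' to the NIC. */
static void
send_burst(uint8_t port, uint16_t queueid)
{
        uint16_t n = tx_mbufs.len;
        uint16_t sent = rte_eth_tx_burst(port, queueid, tx_mbufs.m_table, n);

        while (unlikely(sent < n))      /* free what the NIC did not take */
                rte_pktmbuf_free(tx_mbufs.m_table[sent++]);
        tx_mbufs.len = 0;
}

/* Queue one packet; it is only sent once MAX_PKT_BURST are buffered. */
static void
send_single_packet(struct rte_mbuf *m, uint8_t port, uint16_t queueid)
{
        tx_mbufs.m_table[tx_mbufs.len++] = m;
        if (tx_mbufs.len == MAX_PKT_BURST)
                send_burst(port, queueid);
}

/* Called from the main loop: drain a partial burst once the timer fires. */
static void
drain_if_due(uint64_t *prev_tsc, uint8_t port, uint16_t queueid)
{
        const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
                                   US_PER_S * BURST_TX_DRAIN_US;
        uint64_t cur_tsc = rte_rdtsc();

        if (unlikely(cur_tsc - *prev_tsc > drain_tsc)) {
                if (tx_mbufs.len > 0)
                        send_burst(port, queueid);
                *prev_tsc = cur_tsc;
        }
}

In other words, a packet sits in tx_mbufs until either MAX_PKT_BURST
packets have accumulated or the drain timer fires, which is why I expected
BURST_TX_DRAIN_US to bound the per-packet latency.
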
> Thanks,
> Jun
>
>
> On Thu, May 22, 2014 at 7:06 PM, Shaw, Jeffrey B wrote:
>
>> Hello,
>>
>> > I measured a roundtrip latency (using a Spirent traffic generator) of
>> > sending 64B packets over 10GbE to DPDK, where DPDK does nothing but
>> > forward them back to the incoming port (l3fwd without any lookup
>> > code, i.e., dstport = port_id).
>> > However, to my surprise, the average latency was around 150 usec.
>> > (The packet drop rate was only 0.001%, i.e., 283 packets/sec
>> > dropped.) Another test I did was to measure the latency of sending
>> > only a single 64B packet, and the latency I measured ranged anywhere
>> > from 40 usec to 100 usec.
>>
>> 40-100 usec seems very high.
>> The l3fwd application does some internal buffering before transmitting
>> the packets. It buffers either 32 packets, or waits up to 100 usec
>> (#defined as BURST_TX_DRAIN_US), whichever comes first.
>> Try either removing this timeout, or sending a burst of 32 packets at a
>> time. Or you could try testpmd, which should have reasonably low
>> latency out of the box.
>>
>> There is also a section in the Release Notes (8.6, "How can I tune my
>> network application to achieve lower latency?") which provides some
>> pointers for getting lower latency if you are willing to give up
>> top-rate throughput.
>>
>> Thanks,
>> Jeff
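
P.S. Just to check that I am reading Jeff's suggestion correctly:
"removing this timeout" would amount to handing each packet straight to
the NIC instead of going through the MAX_PKT_BURST / BURST_TX_DRAIN_US
path above. Something like the sketch below (my own illustration, not the
actual l3fwd code; send_packet_now() is a name I made up), at the cost of
one TX doorbell per packet and therefore lower peak throughput:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/*
 * Forward one packet with no intermediate buffering. 'dst_port' and
 * 'queueid' come from the forwarding decision (in my no-lookup test,
 * dst_port is simply the receive port).
 */
static inline void
send_packet_now(struct rte_mbuf *m, uint8_t dst_port, uint16_t queueid)
{
        if (rte_eth_tx_burst(dst_port, queueid, &m, 1) == 0)
                rte_pktmbuf_free(m);    /* TX ring full: drop, don't stall */
}

The gentler variant is to keep the existing code path and just lower
MAX_PKT_BURST (even to 1) or shrink BURST_TX_DRAIN_US, which is closer to
what I have been sweeping in my experiments.
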