From: Jun Han
To: "Shaw, Jeffrey B"
Cc: "dev@dpdk.org"
Date: Mon, 26 May 2014 21:39:05 +0200
Subject: Re: [dpdk-dev] DPDK Latency Issue

Thanks a lot, Jeff, for your detailed explanation. I still have one open
question left, and I would be grateful if someone could share their
insight on it.

I have performed experiments varying both MAX_BURST_SIZE (originally set
to 32) and BURST_TX_DRAIN_US (originally set to 100 usec) in l3fwd
main.c. When I vary MAX_BURST_SIZE (1, 8, 16, 32, 64, and 128) with
BURST_TX_DRAIN_US fixed at 100 usec, I see a low average latency whenever
I send a burst of packets less than or equal to MAX_BURST_SIZE. For
example, when MAX_BURST_SIZE is 32 and I send a burst of 32 packets or
fewer, I get around 10 usec of latency. Once the burst exceeds that size,
the average latency starts to climb, which makes total sense.

My main question is the following. When I send a continuous stream of
packets at 14.88 Mpps (64B packets), I consistently measure an average
latency of 150 usec, no matter what MAX_BURST_SIZE is. My expectation was
that the latency would be bounded by BURST_TX_DRAIN_US, which is fixed at
100 usec. Would you please share your thoughts on this issue? (A
condensed sketch of the buffering logic in question is appended after the
quoted reply below.)

Thanks,
Jun

On Thu, May 22, 2014 at 7:06 PM, Shaw, Jeffrey B wrote:
> Hello,
>
> I measured a round-trip latency (using a Spirent traffic generator) of
> sending 64B packets over a 10GbE link to DPDK, where DPDK does nothing
> but simply forward them back to the incoming port (l3fwd without any
> lookup code, i.e., dstport = port_id).
>
> However, to my surprise, the average latency was around 150 usec. (The
> packet drop rate was only 0.001%, i.e., 283 packets/sec dropped.)
> Another test I did was to measure the latency of sending only a single
> 64B packet, and the latency I measured ranged anywhere from 40 usec to
> 100 usec.
>
> 40-100 usec seems very high.
> The l3fwd application does some internal buffering before transmitting
> the packets. It buffers either 32 packets, or waits up to 100 usec
> (#defined as BURST_TX_DRAIN_US), whichever comes first.
>
> Try either removing this timeout, or sending a burst of 32 packets at a
> time. Or you could try testpmd, which should have reasonably low latency
> out of the box.
>
> There is also a section in the Release Notes ("8.6 How can I tune my
> network application to achieve lower latency?") that provides some
> pointers for getting lower latency if you are willing to give up
> top-rate throughput.
>
> Thanks,
> Jeff
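
For reference, below is a condensed sketch of the buffering logic under
discussion, paraphrased from the l3fwd example sources of that era. The
exact code in your tree may differ: in the shipped example the burst-size
macro is MAX_PKT_BURST (referred to as MAX_BURST_SIZE earlier in this
thread), and helper names like send_burst() and main_loop_sketch() here
only follow the app's conventions, they are not verbatim.

#include <stdint.h>
#include <rte_branch_prediction.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define MAX_PKT_BURST      32        /* packets buffered before a forced TX */
#define BURST_TX_DRAIN_US 100        /* drain partial buffers this often */
#define US_PER_S          1000000ULL /* microseconds per second */

/* Per-lcore software TX buffer for one port (simplified). */
struct mbuf_table {
	uint16_t len;
	struct rte_mbuf *m_table[MAX_PKT_BURST];
};

/* Transmit everything queued in the table; free anything the NIC
 * could not accept. */
static void
send_burst(struct mbuf_table *tx, uint8_t port, uint16_t queue)
{
	uint16_t sent = rte_eth_tx_burst(port, queue, tx->m_table, tx->len);

	while (unlikely(sent < tx->len))
		rte_pktmbuf_free(tx->m_table[sent++]);
	tx->len = 0;
}

/* Queue one packet; an actual TX happens only once a full burst has
 * accumulated. This is the buffering that adds latency. */
static void
send_single_packet(struct mbuf_table *tx, uint8_t port, uint16_t queue,
		   struct rte_mbuf *m)
{
	tx->m_table[tx->len++] = m;
	if (tx->len == MAX_PKT_BURST)
		send_burst(tx, port, queue);
}

/* Skeleton of the lcore main loop: a partially filled buffer is drained
 * only after BURST_TX_DRAIN_US worth of TSC cycles have elapsed. */
static void
main_loop_sketch(struct mbuf_table *tx, uint8_t port, uint16_t queue)
{
	const uint64_t drain_tsc =
		(rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US;
	uint64_t prev_tsc = 0, cur_tsc, diff_tsc;

	for (;;) {
		cur_tsc = rte_rdtsc();
		diff_tsc = cur_tsc - prev_tsc;
		if (unlikely(diff_tsc > drain_tsc)) {
			if (tx->len > 0)
				send_burst(tx, port, queue);
			prev_tsc = cur_tsc;
		}
		/* ... rte_eth_rx_burst() and forwarding work go here,
		 * calling send_single_packet() once per packet ... */
	}
}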
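
As a concrete reading of the "try removing this timeout" suggestion
above, one hypothetical low-latency variant (not from the DPDK sources)
is to bypass the software buffer entirely and transmit each packet as
soon as it is forwarded:

/* Transmit immediately instead of accumulating a burst; if the TX ring
 * is full, drop the packet rather than stalling the forwarding loop. */
static void
send_single_packet_unbuffered(uint8_t port, uint16_t queue,
			      struct rte_mbuf *m)
{
	if (rte_eth_tx_burst(port, queue, &m, 1) == 0)
		rte_pktmbuf_free(m);
}

This pays the per-packet transmit overhead on every packet, so peak
throughput at 14.88 Mpps will likely suffer; that is the
throughput-for-latency trade-off the Release Notes section mentioned
above is about.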