From: Jun Han
To: "Jayakumar, Muthurajan"
Cc: "dev@dpdk.org"
Date: Tue, 27 May 2014 20:30:48 +0200
Subject: Re: [dpdk-dev] roundtrip delay

Hi all,

I also asked a similar question on a previous thread, but I am copying it
here for better visibility. I would really appreciate any hints on the
question below. Thanks a lot!

Thanks a lot, Jeff, for your detailed explanation. I still have one open
question left and would be grateful if someone could share their insight
on it.

I have run experiments varying both MAX_BURST_SIZE (originally 32) and
BURST_TX_DRAIN_US (originally 100 usec) in l3fwd main.c. While varying
MAX_BURST_SIZE (1, 8, 16, 32, 64, and 128) with BURST_TX_DRAIN_US fixed at
100 usec, I see low average latency whenever I send a burst of packets less
than or equal to MAX_BURST_SIZE. For example, when MAX_BURST_SIZE is 32 and
I send a burst of 32 packets or fewer, I get around 10 usec of latency.
When the burst exceeds MAX_BURST_SIZE, the average latency starts to rise,
which makes total sense.

My main question is the following: when I send a continuous stream of 64B
packets at 14.88 Mpps, I consistently measure an average latency of around
150 usec, no matter what MAX_BURST_SIZE is. My expectation was that the
latency should be bounded by BURST_TX_DRAIN_US, which is fixed at 100 usec.
Would you please share your thoughts on this?
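For concreteness, here is a minimal sketch of the buffering/drain behaviour
I am varying, as I understand it from the l3fwd main loop. This is my own
simplified paraphrase, not the upstream code; the tx_buffer struct and the
flush_tx/fwd_loop helpers are names I made up for illustration:

#include <stdint.h>

#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define MAX_BURST_SIZE    32      /* value I vary: 1, 8, 16, 32, 64, 128 */
#define BURST_TX_DRAIN_US 100     /* drain timeout I vary, in microseconds */
#define US_PER_S          1000000

/* Per-port software TX buffer (simplified stand-in). */
struct tx_buffer {
    uint16_t len;
    struct rte_mbuf *mbufs[MAX_BURST_SIZE];
};

/* Flush whatever is currently buffered to tx_port, queue 0. */
static void
flush_tx(struct tx_buffer *buf, uint16_t tx_port)
{
    uint16_t sent = rte_eth_tx_burst(tx_port, 0, buf->mbufs, buf->len);

    while (sent < buf->len)          /* free anything the NIC did not accept */
        rte_pktmbuf_free(buf->mbufs[sent++]);
    buf->len = 0;
}

/* Forwarding loop: rx_port -> tx_port with no lookup (dstport fixed). */
static void
fwd_loop(uint16_t rx_port, uint16_t tx_port, struct tx_buffer *buf)
{
    const uint64_t drain_tsc =
        (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US;
    uint64_t prev_tsc = rte_rdtsc();
    struct rte_mbuf *pkts[MAX_BURST_SIZE];

    for (;;) {
        uint64_t cur_tsc = rte_rdtsc();

        /* Time path: flush a partially filled buffer after the timeout. */
        if (cur_tsc - prev_tsc > drain_tsc) {
            if (buf->len > 0)
                flush_tx(buf, tx_port);
            prev_tsc = cur_tsc;
        }

        /* RX path: pull up to MAX_BURST_SIZE packets and buffer them. */
        uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, pkts, MAX_BURST_SIZE);
        for (uint16_t i = 0; i < nb_rx; i++) {
            buf->mbufs[buf->len++] = pkts[i];
            /* Size path: flush as soon as the buffer fills. */
            if (buf->len == MAX_BURST_SIZE)
                flush_tx(buf, tx_port);
        }
    }
}

In my tests I only change the two macros at the top. At 14.88 Mpps a burst
of 32 packets accumulates in roughly 2 usec, so I would expect either the
size-based flush or, at worst, the 100 usec drain to bound the latency,
which is why the ~150 usec average surprises me.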
On Sun, May 25, 2014 at 8:12 PM, Jayakumar, Muthurajan
<muthurajan.jayakumar@intel.com> wrote:

> Please kindly refer to the recent thread titled "DPDK Latency Issue" on a
> similar topic. Below I have copied and pasted Jeff Shaw's reply from that
> thread.
>
> Hello,
>
> I measured a roundtrip latency (using a Spirent traffic generator) of
> sending 64B packets over 10GbE to DPDK, where DPDK does nothing but simply
> forward them back to the incoming port (l3fwd without any lookup code,
> i.e., dstport = port_id).
> However, to my surprise, the average latency was around 150 usec (the
> packet drop rate was only 0.001%, i.e., 283 packets/sec dropped). Another
> test I did was to measure the latency of sending only a single 64B packet,
> and the latency I measured ranged anywhere from 40 usec to 100 usec.
>
> 40-100 usec seems very high.
> The l3fwd application does some internal buffering before transmitting the
> packets. It buffers either 32 packets, or waits up to 100 us (#defined as
> BURST_TX_DRAIN_US), whichever comes first.
> Try either removing this timeout, or sending a burst of 32 packets at a
> time. Or you could try testpmd, which should have reasonably low latency
> out of the box.
>
> There is also a section in the Release Notes (8.6, "How can I tune my
> network application to achieve lower latency?") which provides some
> pointers for getting lower latency if you are willing to give up top-rate
> throughput.
>
> Thanks,
> Jeff
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helmut Sim
> Sent: Sunday, May 25, 2014 7:55 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] roundtrip delay
>
> Hi,
>
> What is the best way to optimize the round-trip delay of a packet, i.e.,
> receiving a packet and then resending it back to the network in minimal
> time, assuming the RX and TX threads run in a continuous loop of rx/tx?
>
> Thanks,
>