From: Alexander Belyakov
To: "De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>
Cc: dev@dpdk.org
Date: Wed, 28 Jan 2015 15:24:19 +0300
Subject: Re: [dpdk-dev] DPDK testpmd forwarding performance degradation

On Tue, Jan 27, 2015 at 7:21 PM, De Lara Guarch, Pablo
<pablo.de.lara.guarch@intel.com> wrote:

> On Tue, Jan 27, 2015 at 10:51 AM, Alexander Belyakov wrote:
>
> > Hi Pablo,
> >
> > On Mon, Jan 26, 2015 at 5:22 PM, De Lara Guarch, Pablo wrote:
> >
> > > Hi Alexander,
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Alexander Belyakov
> > > > Sent: Monday, January 26, 2015 10:18 AM
> > > > To: dev@dpdk.org
> > > > Subject: [dpdk-dev] DPDK testpmd forwarding performance degradation
> > > >
> > > > Hello,
> > > >
> > > > recently I have found a case of significant performance degradation
> > > > for our application (built on top of DPDK, of course). Surprisingly,
> > > > a similar issue is easily reproduced with the default testpmd.
> > > >
> > > > To show the case we need a simple IPv4 UDP flood with variable UDP
> > > > payload size. By "packet length" below I mean: Eth header length
> > > > (14 bytes) + IPv4 header length (20 bytes) + UDP header length
> > > > (8 bytes) + UDP payload length (variable) + CRC (4 bytes). Source IP
> > > > addresses and ports are selected randomly for each packet.
> > > >
> > > > I have used DPDK revisions 1.6.0r2 and 1.7.1. Both show the same
> > > > issue.
> > > >
> > > > Follow the "Quick start" guide (http://dpdk.org/doc/quick-start) to
> > > > build and run testpmd. Enable testpmd forwarding ("start" command).
> > > >
> > > > The table below shows the measured forwarding performance as a
> > > > function of packet length:
> > > >
> > > > No. -- UDP payload length (bytes) -- Packet length (bytes) --
> > > > Forwarding performance (Mpps) -- Expected theoretical performance
> > > > (Mpps)
> > > >
> > > > 1. 0 -- 64 -- 14.8 -- 14.88
> > > > 2. 34 -- 80 -- 12.4 -- 12.5
> > > > 3. 35 -- 81 -- 6.2 -- 12.38 (!)
> > > > 4. 40 -- 86 -- 6.6 -- 11.79
> > > > 5. 49 -- 95 -- 7.6 -- 10.87
> > > > 6. 50 -- 96 -- 10.7 -- 10.78 (!)
> > > > 7. 60 -- 106 -- 9.4 -- 9.92
> > > >
> > > > At line number 3 we have added 1 byte of UDP payload (compared to
> > > > the previous line) and forwarding performance is halved: 6.2 Mpps
> > > > against the expected theoretical maximum of 12.38 Mpps for this
> > > > packet size.
> > > >
> > > > That is the issue.
> > > >
> > > > Significant performance degradation persists up to 50 bytes of UDP
> > > > payload (96 bytes packet length), where performance jumps back to
> > > > the theoretical maximum.
> > > >
> > > > What is happening between 80 and 96 bytes packet length?
> > > >
> > > > This issue is stable and 100% reproducible. At this point I am not
> > > > sure if it is a DPDK or a NIC issue. These tests have been performed
> > > > on an Intel(R) Ethernet Server Bypass Adapter X520-LR2 (X520LR2BP).
> > > >
> > > > Is anyone aware of such strange behavior?
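(For reference: the "expected theoretical performance" column follows from
standard 10GbE framing overhead. Besides the packet bytes counted above,
each frame occupies an extra 8-byte preamble and a 12-byte inter-frame gap
on the wire. A minimal sketch that reproduces the column:)

#include <stdio.h>

/*
 * Theoretical 10GbE line rate in Mpps. "pkt_len" is counted exactly as
 * in the table above: Eth header (14) + IPv4 header (20) + UDP header
 * (8) + UDP payload + CRC (4). On the wire each frame also carries an
 * 8-byte preamble and a 12-byte inter-frame gap.
 */
static double theoretical_mpps(unsigned int pkt_len)
{
    const double link_bps = 10e9;                /* 10 Gbit/s */
    unsigned int wire_bytes = pkt_len + 8 + 12;  /* preamble + IFG */

    return link_bps / (wire_bytes * 8) / 1e6;
}

int main(void)
{
    const unsigned int lens[] = { 64, 80, 81, 86, 95, 96, 106 };
    size_t i;

    for (i = 0; i < sizeof(lens) / sizeof(lens[0]); i++)
        printf("%3u bytes -> %5.2f Mpps\n", lens[i],
               theoretical_mpps(lens[i]));

    return 0; /* prints 14.88, 12.50, 12.38, 11.79, 10.87, 10.78, 9.92 */
}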
> > > I cannot reproduce the issue using two ports on two different 82599EB
> > > NICs, using 1.7.1 and 1.8.0. I always get either the same or a better
> > > line rate as I increase the packet size.
> >
> > Thank you for trying to reproduce the issue.
> >
> > Actually, have you tried using 1.8.0?
> >
> > I feel 1.8.0 is a little bit immature and might require some
> > post-release patching. Even testpmd from this release is not forwarding
> > packets properly on my setup. It is up and running without visible
> > errors/warnings, and the TX/RX counters are ticking, but I cannot see
> > any packets at the output.
>
> This is strange. Without changing anything, forwarding works perfectly
> for me (so, RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC is enabled).
>
> > Please note, both the 1.6.0r2 and 1.7.1 releases work (on the same
> > setup) out-of-the-box just fine, with the only exception of this
> > mysterious performance drop. So it will take some time to figure out
> > what is wrong with dpdk-1.8.0. Meanwhile we could focus on stable
> > dpdk-1.7.1.
> >
> > Managed to get testpmd from dpdk-1.8.0 to work on my setup.
> > Unfortunately I had to disable RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC; it
> > is new compared to 1.7.1 and somehow breaks testpmd forwarding. By the
> > way, simply disabling RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC in the
> > common_linuxapp config file breaks the build - I had to make a
> > quick'n'dirty fix in struct igb_rx_queue as well.
> >
> > Anyway, the issue is still here.
> >
> > Forwarding 80-byte packets at 12.4 Mpps.
> > Forwarding 81-byte packets at 7.2 Mpps.
> >
> > Any ideas?
> >
> > As for the X520-LR2 NIC - it is a dual-port bypass adapter with device
> > id 155d. I believe it should be treated as an 82599EB except for the
> > bypass feature. I set bypass mode to "normal" in those tests.
>
> I have used a 82599EB first, and now a X520-SR2. Same results. I assume
> that X520-SR2 and X520-LR2 should give similar results (the only thing
> that changes is the wavelength; the controller is the same).

It seems I have found what was wrong, or at least got a hint. My build
server's machine type differs from the test setup's. Until now it was OK
to build DPDK with -march=native. I found that building dpdk-1.8.0 with an
explicitly set core-avx-i (snb, ivb) or bdver2 (amd) machine type almost
eliminates the performance drop. The same goes for the
RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC option issues. It seems DPDK
performance and stability depend on the machine type more than I was
expecting.
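For anyone hitting the same thing, the workaround amounts to pinning the
machine type at build time instead of relying on -march=native. A rough
sketch against the 1.x make-based build system (this assumes an "ivb"
machine profile under mk/machine/, which maps to -march=core-avx-i; adjust
the name to match the test machine's CPU):

# Generate a build config from the usual template, then pin the machine
# type so the compiler targets the test machine's CPU rather than the
# build server's.
make config T=x86_64-native-linuxapp-gcc
sed -i 's/CONFIG_RTE_MACHINE="native"/CONFIG_RTE_MACHINE="ivb"/' build/.config
make

# (The bulk-alloc RX path mentioned above is switched in
# config/common_linuxapp via CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC.)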
Thank you for your help.

Alexander