From: Lincoln Lavoie
Date: Fri, 27 Mar 2020 14:25:32 -0400
Subject: Re: [dpdk-ci] [dpdk-users] DPDK TX problems
To: Thomas Monjalon
Cc: Hrvoje Habjanic, users@dpdk.org, galco@mellanox.com, asafp@mellanox.com, olgas@mellanox.com, ci@dpdk.org

Hi Thomas,

I've captured this as https://bugs.dpdk.org/show_bug.cgi?id=429, so we can
add it to the list of development items for the testing, etc.

Cheers,
Lincoln

On Thu, Mar 26, 2020 at 4:54 PM Thomas Monjalon <thomas@monjalon.net> wrote:

> Thanks for the interesting feedback.
> It seems we should test this performance use case in our labs.
>
>
> 18/02/2020 09:36, Hrvoje Habjanic:
> > On 08. 04. 2019. 11:52, Hrvoje Habjanić wrote:
> > > On 29/03/2019 08:24, Hrvoje Habjanić wrote:
> > >>> Hi.
> > >>>
> > >>> I wrote an application using DPDK 17.11 (I also tried 18.11), and
> > >>> while doing some performance testing I am seeing very odd behavior.
> > >>> To verify that this is not caused by my app, I ran the same test
> > >>> with the l2fwd example app, and I am still confused by the results.
> > >>>
> > >>> In short, I am trying to push a lot of L2 packets through the DPDK
> > >>> engine - packet processing is minimal. When testing, I start with a
> > >>> small number of packets per second and then gradually increase it
> > >>> to see where the limit is. At some point I reach this limit -
> > >>> packets start to get dropped. And this is where things get weird.
> > >>>
> > >>> When I reach the peak packet rate (at which packets start to get
> > >>> dropped), I would expect that reducing the packet rate would stop
> > >>> the drops. But this is not the case. For example, let's assume the
> > >>> peak packet rate is 3.5 Mpps. At this point everything works fine.
> > >>> Increasing the rate to 4.0 Mpps causes a lot of dropped packets.
> > >>> When reducing the rate back to 3.5 Mpps, the app is still broken -
> > >>> packets are still dropped.
> > >>>
> > >>> At this point I need to drastically reduce the rate (to 1.4 Mpps)
> > >>> to make the drops go away. The app is also unable to successfully
> > >>> forward anything beyond this 1.4 Mpps, despite the fact that in the
> > >>> beginning it forwarded 3.5 Mpps! The only way to recover is to
> > >>> restart the app.
> > >>>
> > >>> Also, sometimes the app just stops forwarding any packets - packets
> > >>> are received (as seen by the counters), but the app is unable to
> > >>> send anything back.
> > >>>
> > >>> As mentioned, I see the same behavior with the l2fwd example app. I
> > >>> tested both DPDK 17.11 and DPDK 18.11 - the results are the same.
> > >>>
> > >>> My test environment is an HP DL380 G8 with 82599ES 10G (ixgbe)
> > >>> cards, connected to a Cisco Nexus 9300 switch. On the other side is
> > >>> an Ixia test appliance. The application runs in a virtual machine
> > >>> (VM) under KVM (OpenStack, with SR-IOV enabled and NUMA
> > >>> restrictions). I checked that the VM uses only CPUs from the NUMA
> > >>> node the network card is attached to, so there is no cross-NUMA
> > >>> traffic. OpenStack is Queens and the host runs Ubuntu Bionic. The
> > >>> virtual machine also uses Ubuntu Bionic as its OS.
> > >>>
> > >>> I do not know how to debug this. Does someone else have the same
> > >>> observations?
> > >>>
> > >>> Regards,
> > >>>
> > >>> H.
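
Side note for anyone reproducing this: the "rx missed" counters mentioned
in the findings below can also be read from inside the DPDK application
itself, not only from the host's card statistics. Below is a minimal
sketch, assuming a single, already-started port and the stats API as it
existed around DPDK 17.11; the helper name is made up.

/* Hypothetical helper, not part of l2fwd: dump the per-port counters
 * that show where packets are being lost.  Assumes the port has already
 * been configured and started elsewhere (DPDK 17.11-era API). */
#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
print_drop_counters(uint16_t port_id)
{
    struct rte_eth_stats st;

    if (rte_eth_stats_get(port_id, &st) != 0) {
        printf("port %" PRIu16 ": stats not available\n", port_id);
        return;
    }
    /* imissed: packets dropped by the NIC because the RX ring was full,
     * i.e. the PMD/application did not drain it fast enough.
     * rx_nombuf: RX mbuf allocation failures, a hint that the mempool
     * is running dry. */
    printf("port %" PRIu16 ": rx=%" PRIu64 " tx=%" PRIu64
           " imissed=%" PRIu64 " ierrors=%" PRIu64
           " oerrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
           port_id, st.ipackets, st.opackets, st.imissed,
           st.ierrors, st.oerrors, st.rx_nombuf);
}
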
> > >> There are additional findings. It seems that when I reach the peak
> > >> pps rate, the application is not fast enough, and I can see "rx
> > >> missed" errors in the card statistics on the host. At the same time
> > >> the TX side starts to show problems (tx burst starts reporting that
> > >> it did not send all packets). Shortly after that, TX falls apart
> > >> completely and the top pps rate drops.
> > >>
> > >> Since I did not disable pause frames, I can see that the "RX pause"
> > >> frame counter on the switch is increasing. On the other hand, if I
> > >> disable pause frames (on the NIC of the server), the host driver
> > >> (ixgbe) reports "TX unit hang" in dmesg and issues a card reset. Of
> > >> course, after the reset none of the DPDK apps in the VMs on this
> > >> host work anymore.
> > >>
> > >> Is it possible that at the time of congestion DPDK does not release
> > >> mbufs back to the pool, and the TX ring becomes "filled" with zombie
> > >> packets (not sent by the card but still holding a reference count as
> > >> if they were in use)?
> > >>
> > >> Is there a way to check the mempool or the TX ring for such
> > >> "left-overs"? Is it possible to somehow "flush" the TX ring and/or
> > >> the mempool?
> > >>
> > >> H.
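
Another side note, and not code from Hrvoje's application: the following
is a minimal sketch of the usual way to handle a partial
rte_eth_tx_burst() - retry the unsent tail a few times, then explicitly
free whatever the NIC still refuses so the mbufs always return to the
pool - together with a rough way to watch the pool for the "left-overs"
asked about above. The helper names and the retry limit are assumptions.

/* Sketch only, not from the application under discussion.  Helper names
 * and the retry limit are invented for illustration (DPDK 17.11-era
 * API). */
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define TX_RETRIES 3

static void
send_burst(uint16_t port_id, uint16_t queue_id,
           struct rte_mbuf **pkts, uint16_t nb_pkts)
{
    uint16_t sent = 0;
    int retry;

    /* Retry the unsent tail a few times; the TX ring may simply be full
     * for a moment while the NIC catches up. */
    for (retry = 0; retry < TX_RETRIES && sent < nb_pkts; retry++)
        sent += rte_eth_tx_burst(port_id, queue_id,
                                 pkts + sent, nb_pkts - sent);

    /* Whatever the NIC still refused must be freed explicitly, otherwise
     * those mbufs never go back to the mempool. */
    for (; sent < nb_pkts; sent++)
        rte_pktmbuf_free(pkts[sent]);
}

/* Rough "left-over" check: if in_use keeps climbing while the offered
 * load is steady, mbufs are being held somewhere (TX ring, application
 * or driver) and the pool is slowly draining. */
static void
report_pool(const struct rte_mempool *mp)
{
    printf("%s: avail=%u in_use=%u\n", mp->name,
           rte_mempool_avail_count(mp),
           rte_mempool_in_use_count(mp));
}

If I remember the example code correctly, l2fwd already handles the
free-on-failure part through its TX buffer error callback, so unsent
packets are dropped and counted rather than leaked; the pool counters are
still a useful cross-check on a given setup.
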
> > > After a few more tests, things became even weirder - if I do not
> > > free the mbufs which were not sent, but resend them again, I can
> > > "survive" the over-the-peak event! But then the peak rate starts to
> > > drop gradually ...
> > >
> > > I would like to ask if someone can try this on their platform and
> > > report back? I would really like to know whether this is a problem
> > > with my deployment, or whether there is something wrong with DPDK.
> > >
> > > The test should be simple - use l2fwd or l3fwd and determine the max
> > > pps. Then drive the pps 30% over max, return back, and confirm that
> > > you can still get the max pps.
> > >
> > > Thanks in advance.
> > >
> > > H.
> >
> > I have received a few mails from users facing this issue, asking how
> > it was resolved.
> >
> > Unfortunately, there is no real fix. It seems that this issue is
> > related to the card and hardware used. I'm still not sure which is
> > more to blame, but the combination I had is definitely problematic.
> >
> > Anyhow, in the end I concluded that the card driver has some issues
> > when it is saturated with packets. My suspicion is that the
> > driver/software does not properly free packets, the DPDK mempool then
> > becomes fragmented, and this causes the performance drops. Restarting
> > the software releases the pools and restores proper functionality.
> >
> > After no luck with ixgbe, we migrated to Mellanox (4LX), and now there
> > is no more of this permanent performance drop. With mlx, when the
> > limit is reached, reducing the number of packets restores packet
> > forwarding, and this limit seems to be stable.
> >
> > Also, we moved to newer servers - DL380 G10 - and got a significant
> > performance increase. We also moved to a newer switch (also Cisco)
> > with 25G ports, which reduced latency - almost by a factor of 2!
> >
> > I did not try the old ixgbe cards in the newer server, but I did try
> > Intel's XL710, and it is not as happy as the Mellanox. It gives better
> > pps, but it is more unstable in terms of maximum bandwidth (it has
> > similar issues to ixgbe).
> >
> > Regards,
> >
> > H.
>

-- 
*Lincoln Lavoie*
Senior Engineer, Broadband Technologies
21 Madbury Rd., Ste. 100, Durham, NH 03824
lylavoie@iol.unh.edu
https://www.iol.unh.edu
+1-603-674-2755 (m)