From: Hrvoje Habjanic <hrvoje.habjanic@zg.ht.hr>
To: users@dpdk.org
Date: Tue, 18 Feb 2020 09:36:43 +0100
Subject: Re: [dpdk-users] DPDK TX problems

On 08. 04. 2019. 11:52, Hrvoje Habjanić wrote:
> On 29/03/2019 08:24, Hrvoje Habjanić wrote:
>>> Hi.
>>>
>>> I wrote an application using DPDK 17.11 (I also tried 18.11), and
>>> while doing some performance testing I saw very odd behavior. To
>>> verify that it is not caused by my app, I ran the same test with
>>> the l2fwd example app, and I am still confused by the results.
>>>
>>> In short, I am trying to push a lot of L2 packets through the DPDK
>>> engine - packet processing is minimal. When testing, I start with
>>> a small packets-per-second rate and gradually increase it to find
>>> the limit. At some point I do reach this limit - packets start to
>>> get dropped. And this is where things become weird.
>>>
>>> When I reach the peak packet rate (at which packets start to get
>>> dropped), I would expect that reducing the packet rate would stop
>>> the drops. But this is not the case. For example, assume the peak
>>> packet rate is 3.5 Mpps. At that point everything works fine.
>>> Increasing the rate to 4.0 Mpps produces a lot of dropped packets.
>>> But when I reduce the rate back to 3.5 Mpps, the app is still
>>> broken - packets are still dropped.
>>>
>>> At this point I have to drastically reduce the rate (to 1.4 Mpps)
>>> to make the drops go away. Also, the app is then unable to forward
>>> anything beyond this 1.4 Mpps, despite the fact that in the
>>> beginning it forwarded 3.5 Mpps! The only way to recover is to
>>> restart the app.
>>>
>>> Also, sometimes the app just stops forwarding any packets -
>>> packets are received (as seen by the counters), but the app is
>>> unable to send anything back.
>>>
>>> As I mentioned, I see the same behavior with the l2fwd example
>>> app. I tested DPDK 17.11 and also DPDK 18.11 - the results are the
>>> same.
>>>
>>> My test environment is an HP DL380G8 with 82599ES 10GbE (ixgbe)
>>> cards, connected to a Cisco Nexus 9300 switch. On the other side
>>> is an Ixia test appliance. The application runs in a virtual
>>> machine (VM) under KVM (OpenStack, with SR-IOV enabled and NUMA
>>> restrictions). I checked that the VM uses only CPUs from the NUMA
>>> node to which the network card is attached, so there is no
>>> cross-NUMA traffic. OpenStack is the Queens release on Ubuntu
>>> Bionic, and the virtual machine also runs Ubuntu Bionic.
>>>
>>> I do not know how to debug this. Does someone else have the same
>>> observations?
>>>
>>> Regards,
>>>
>>> H.
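
For context, the TX path in question is the usual burst loop. A
minimal sketch of what l2fwd effectively does (the real example
buffers packets via rte_eth_tx_buffer() and frees unsent ones in a
drop callback, but the net effect is the same):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Forward one burst from rx_port to tx_port. Packets the TX
     * queue did not accept are freed, i.e. dropped by the app. */
    static void
    forward_burst(uint16_t rx_port, uint16_t tx_port)
    {
            struct rte_mbuf *pkts[BURST_SIZE];
            uint16_t nb_rx, nb_tx, i;

            nb_rx = rte_eth_rx_burst(rx_port, 0, pkts, BURST_SIZE);
            if (nb_rx == 0)
                    return;

            nb_tx = rte_eth_tx_burst(tx_port, 0, pkts, nb_rx);

            /* TX descriptor ring full - drop the unsent tail. */
            for (i = nb_tx; i < nb_rx; i++)
                    rte_pktmbuf_free(pkts[i]);
    }

The question in the follow-ups below is what happens to mbufs that
the PMD accepted but never actually put on the wire.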
>> There are additional findings. It seems that when I reach the peak
>> pps rate, the application is not fast enough, and I can see "rx
>> missed" errors in the card statistics on the host. At the same
>> time, the TX side starts to show problems (tx burst starts to
>> report that it did not send all packets). Shortly after that, TX
>> falls apart completely and the top pps rate drops.
>>
>> Since I did not disable pause frames, I can see the "RX pause"
>> frame counter increasing on the switch. On the other hand, if I
>> disable pause frames (on the NIC of the server), the host driver
>> (ixgbe) reports "TX unit hang" in dmesg and issues a card reset.
>> Of course, after the reset none of the DPDK apps in the VMs on this
>> host work anymore.
>>
>> Is it possible that at the time of congestion DPDK does not release
>> mbufs back to the pool, and the TX ring becomes "filled" with
>> zombie packets (not sent by the card, but still refcounted as in
>> use)?
>>
>> Is there a way to check the mempool or the TX ring for
>> "left-overs"? Is it possible to somehow "flush" the TX ring and/or
>> the mempool?
>>
>> H.
> After a few more tests, things became even weirder - if I do not
> free the mbufs which were not sent, but resend them instead, I can
> "survive" the over-the-peak event! But then the peak rate starts to
> drop gradually ...
>
> I would ask if someone can try this on their platform and report
> back? I would really like to know whether this is a problem with my
> deployment, or whether there is something wrong with DPDK.
>
> The test should be simple - use l2fwd or l3fwd and determine the
> maximum pps. Then drive the pps 30% over the maximum, then return
> below it and confirm that you can still get the maximum pps.
>
> Thanks in advance.
>
> H.

I did receive a few mails from users facing this issue, asking how it
was resolved. Unfortunately, there is no real fix. It seems that this
issue is related to the card and hardware used. I am still not sure
which is more to blame, but the combination I had is definitely
problematic.

Anyhow, in the end I concluded that the card driver has some issues
when it is saturated with packets. My suspicion is that the
driver/software does not properly free packets, so the DPDK mempool
gets drained and fragmented, and this causes the performance drop.
Restarting the software releases the pools and restores proper
functionality. A sketch of the resend workaround, and of how to watch
the mempool for such left-overs, follows below.
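
Roughly, the workaround and the mempool check look like this (an
untested sketch against the 17.11/18.11-era API; the function names
are mine):

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* TX variant that keeps unsent mbufs for the next attempt
     * instead of freeing them. 'pkts' holds 'nb' packets; the unsent
     * tail is shifted to the front, and the number of packets still
     * pending is returned. */
    static uint16_t
    tx_with_resend(uint16_t tx_port, struct rte_mbuf **pkts,
                   uint16_t nb)
    {
            uint16_t nb_tx, i;

            nb_tx = rte_eth_tx_burst(tx_port, 0, pkts, nb);
            for (i = nb_tx; i < nb; i++)
                    pkts[i - nb_tx] = pkts[i];
            return nb - nb_tx;
    }

    /* Crude leak check: if in_use keeps growing while the offered
     * load stays constant, mbufs are not coming back to the pool. */
    static void
    dump_pool_usage(const struct rte_mempool *mp)
    {
            printf("%s: avail=%u in_use=%u\n", mp->name,
                   rte_mempool_avail_count(mp),
                   rte_mempool_in_use_count(mp));
    }

As for "flushing" the TX ring, there is rte_eth_tx_done_cleanup(),
which asks the PMD to free mbufs it has already transmitted from a
given TX queue, but not every driver implements it.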
After no luck with ixgbe, we migrated to Mellanox (ConnectX-4 Lx),
and this permanent performance drop is gone. With mlx, when the limit
is reached, reducing the packet rate restores packet forwarding, and
the limit itself seems to be stable.

Also, we moved to newer servers - DL380G10 - and got a significant
performance increase.

Also, we moved to a newer switch (also Cisco) with 25G ports, which
reduced latency - almost by a factor of 2!

I did not try the old ixgbe cards in the newer server, but I did try
Intel's XL710, and it is not as happy as Mellanox. It gives better
pps, but it is more unstable in terms of maximum bandwidth (it has
similar issues as ixgbe).

Regards,

H.