From: dfernandes@toulouse.viveris.com
To: "Wiles, Keith"
Cc: users@dpdk.org
Date: Mon, 22 May 2017 11:40:51 +0200
Subject: Re: [dpdk-users] Packet losses using DPDK

Hi!

I performed many tests using Pktgen and it seems to work much better.
However, one of the tests showed that 2 packets were dropped. In this
test I sent packets between the 2 physical ports in bidirectional mode
for 24 hours. The packet size was 450 bytes and the rate on both ports
was 1500 Mbps. The port stats I got are the following:

** Port 0 **  Tx: 34481474912   Rx: 34481474846   Dropped: 2
** Port 1 **  Tx: 34481474848   Rx: 34481474912   Dropped: 0

DEBUG portStats = {
  [1] = {
    ["ipackets"]  = 34481474912,
    ["ierrors"]   = 0,
    ["rx_nombuf"] = 0,
    ["ibytes"]    = 15378737810752,
    ["oerrors"]   = 0,
    ["opackets"]  = 34481474848,
    ["obytes"]    = 15378737782208,
  },
  [0] = {
    ["ipackets"]  = 34481474846,
    ["ierrors"]   = 1,
    ["rx_nombuf"] = 0,
    ["ibytes"]    = 15378737781316,
    ["oerrors"]   = 0,
    ["opackets"]  = 34481474912,
    ["obytes"]    = 15378737810752,
  },
  ["n"] = 2,
}

So 2 packets were dropped by port 0, and I see that the "ierrors"
counter has a value of 1. Do you know what this counter represents,
and how should it be interpreted?

By the way, I also performed the same test with the packet size changed
to 1518 bytes and the rate to 4500 Mbps (on each port), and 0 packets
were dropped.
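For reference, the counter names above match the fields of DPDK's
struct rte_eth_stats, which the generator exposes through its Lua
layer. Below is a minimal C sketch of how these counters can be read
through the public rte_ethdev API; "dump_port_stats" is a hypothetical
helper, the port id type (uint16_t) matches recent DPDK releases (it
was uint8_t before 17.11), and error handling is reduced to the bare
minimum:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    /* Dump the basic counters of one port.  In rte_eth_stats,
     * "ierrors" counts packets the NIC received but flagged as
     * erroneous (e.g. CRC or length errors), "imissed" counts
     * packets the hardware dropped because no RX descriptor was
     * available, and "rx_nombuf" counts mbuf allocation failures
     * in the driver. */
    static void
    dump_port_stats(uint16_t port_id)
    {
            struct rte_eth_stats stats;

            if (rte_eth_stats_get(port_id, &stats) != 0)
                    return;

            printf("port %" PRIu16 ": ipackets=%" PRIu64
                   " opackets=%" PRIu64 " ierrors=%" PRIu64
                   " imissed=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
                   port_id, stats.ipackets, stats.opackets,
                   stats.ierrors, stats.imissed, stats.rx_nombuf);
    }

The extended counters (rte_eth_xstats_get()) usually break ierrors down
further, e.g. into CRC and length errors on the X540, which may show
where that single error came from.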
David

On 17.05.2017 09:53, dfernandes@toulouse.viveris.com wrote:
> Thanks for your response!
>
> I have installed Pktgen and I will perform some tests. So far it
> seems to work fine. I'll keep you informed. Thanks again.
>
> David
>
> On 12.05.2017 18:18, Wiles, Keith wrote:
>>> On May 12, 2017, at 10:45 AM, dfernandes@toulouse.viveris.com wrote:
>>>
>>> Hi!
>>>
>>> I am working with MoonGen, which is a fully scriptable packet
>>> generator built on DPDK.
>>> (→ https://github.com/emmericp/MoonGen)
>>>
>>> The system on which I run the tests has the following
>>> characteristics:
>>>
>>> CPU : Intel Core i3-6100 (3.70 GHz, 2 cores, 2 threads/core)
>>> NIC : X540-AT2 with 2x 10GbE ports
>>> OS  : Linux Ubuntu Server 16.04 (kernel 4.4)
>>>
>>> I coded a MoonGen script which requests DPDK to transmit packets
>>> from one physical port and to receive them at the second physical
>>> port. The 2 physical ports are directly connected with an RJ-45
>>> cat6 cable.
>>>
>>> The issue is that I run the same test with exactly the same script
>>> and the same parameters several times, and the results show random
>>> behaviour. For most of the runs there are no losses, but for some
>>> of them I observe packet losses. The percentage of lost packets
>>> varies widely, and losses happen even when the packet rate is very
>>> low.
>>>
>>> Some examples of randomly failing tests:
>>>
>>> # 1,000,000 packets sent (packet size = 124 bytes, rate = 76 Mbps)
>>>   → 10170 lost packets
>>>
>>> # 3,000,000 packets sent (packet size = 450 bytes, rate = 460 Mbps)
>>>   → ALL packets lost
>>>
>>> I tested the following system modifications without success:
>>>
>>> # BIOS parameters:
>>>   Hyperthreading : enabled (because the machine has only 2 cores)
>>>   Multi-processor : enabled
>>>   Virtualization Technology (VTx) : disabled
>>>   Virtualization Technology for Directed I/O (VTd) : disabled
>>>   Allow PCIe/PCI SERR# Interrupt (= PCIe System Errors) : disabled
>>>   NUMA unavailable
>>>
>>> # isolcpus, in order to isolate the cores in charge of transmission
>>>   and reception
>>>
>>> # hugepage size = 1048576 kB
>>>
>>> # size of the descriptor rings: tried with Tx = 512 and Rx = 128
>>>   descriptors, and also with Tx = 4096 and Rx = 4096 descriptors
>>>
>>> # tested with 2 different X540-T2 NIC units
>>>
>>> # I also ran everything on a Dell FC430, which has an Intel Xeon
>>>   E5-2660 v3 CPU @ 2.6 GHz with 10 cores and 2 threads/core (tested
>>>   with and without hyper-threading)
>>>   → same results, and even worse
>>>
>>> Remark concerning the NIC stats:
>>> I used the rte_eth_stats struct in order to get more information
>>> about the losses, and I observed that in some cases, when there are
>>> packet losses, the ierrors value is > 0 and also
>>> ierrors + imissed + ipackets < opackets. In other cases I get
>>> ierrors = 0 and imissed + ipackets = opackets, which makes more
>>> sense.
>>>
>>> What could be the origin of this erroneous packet counting?
>>>
>>> Do you have any explanation for this behaviour?
>>
>> Not knowing MoonGen at all, other than from a brief look at the
>> source, I may not be much help, but I have a few ideas to help
>> locate the problem.
>>
>> Try using testpmd in tx-only mode, or try Pktgen, to see whether you
>> get the same problem. I hope this will narrow the problem down to a
>> specific area. As we know, DPDK works if correctly coded, and
>> testpmd/Pktgen work.
>>
>>> Thanks in advance.
>>>
>>> David
>>
>> Regards,
>> Keith
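As a footnote to Keith's suggestion, a minimal testpmd session for the
tx-only check could look like the following sketch (the binary path,
the -l core list and the -n memory-channel count are only examples and
need adapting to the machine; both X540 ports are assumed to be bound
to a DPDK-compatible driver):

    ./testpmd -l 0-1 -n 4 -- -i
    testpmd> set fwd txonly
    testpmd> start
    testpmd> show port stats all
    testpmd> stop
    testpmd> quit

Comparing the RX-errors and RX-missed counters reported there against
the MoonGen numbers should help separate a NIC/driver problem from a
script-level one.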