From: Andriy Berestovskyy
Date: Mon, 22 May 2017 14:10:20 +0200
To: dfernandes@toulouse.viveris.com
Cc: "Wiles, Keith", users
Subject: Re: [dpdk-users] Packet losses using DPDK
List-Id: DPDK usage discussions

Hi,
Please have a look at https://en.wikipedia.org/wiki/High_availability
I was trying to calculate your link availability, but my Ubuntu
calculator gives me 0 for 2 / 34 481 474 846 ;)

Most probably you dropped a packet during the start/stop.

ierrors is what your NIC considers an erroneous Ethernet frame (bad
checksum, runts, giants, etc.)

Regards,
Andriy

On Mon, May 22, 2017 at 11:40 AM, dfernandes@toulouse.viveris.com wrote:
> Hi!
>
> I performed many tests using Pktgen and it seems to work much better.
> However, I observed that one of the tests showed that 2 packets were
> dropped. In this test I sent packets between the 2 physical ports in
> bidirectional mode for 24 hours. The packet size was 450 bytes and the
> rate on both ports was 1500 Mbps.
>
> The port stats I got are the following:
>
> ** Port 0 ** Tx: 34481474912. Rx: 34481474846. Dropped: 2
> ** Port 1 ** Tx: 34481474848. Rx: 34481474912.
> Dropped: 0
>
> DEBUG portStats = {
>     [1] = {
>         ["ipackets"] = 34481474912,
>         ["ierrors"] = 0,
>         ["rx_nombuf"] = 0,
>         ["ibytes"] = 15378737810752,
>         ["oerrors"] = 0,
>         ["opackets"] = 34481474848,
>         ["obytes"] = 15378737782208,
>     },
>     [0] = {
>         ["ipackets"] = 34481474846,
>         ["ierrors"] = 1,
>         ["rx_nombuf"] = 0,
>         ["ibytes"] = 15378737781316,
>         ["oerrors"] = 0,
>         ["opackets"] = 34481474912,
>         ["obytes"] = 15378737810752,
>     },
>     ["n"] = 2,
> }
>
> So 2 packets were dropped by port 0, and I see that the "ierrors"
> counter has a value of 1. Do you know what this counter represents?
> And how should it be interpreted?
> By the way, I also performed the same test changing the packet size to
> 1518 bytes and the rate to 4500 Mbps (on each port), and 0 packets were
> dropped.
>
> David
>
> Le 17.05.2017 09:53, dfernandes@toulouse.viveris.com wrote:
>>
>> Thanks for your response!
>>
>> I have installed Pktgen and I will perform some tests. So far it seems
>> to work fine. I'll keep you informed. Thanks again.
>>
>> David
>>
>> Le 12.05.2017 18:18, Wiles, Keith wrote:
>>>>
>>>> On May 12, 2017, at 10:45 AM, dfernandes@toulouse.viveris.com wrote:
>>>>
>>>> Hi!
>>>>
>>>> I am working with MoonGen, which is a fully scriptable packet
>>>> generator built on DPDK.
>>>> (→ https://github.com/emmericp/MoonGen)
>>>>
>>>> The system on which I perform the tests has the following
>>>> characteristics:
>>>>
>>>> CPU: Intel Core i3-6100 (3.70 GHz, 2 cores, 2 threads/core)
>>>> NIC: X540-AT2 with 2x 10GbE ports
>>>> OS: Linux Ubuntu Server 16.04 (kernel 4.4)
>>>>
>>>> I coded a MoonGen script which requests DPDK to transmit packets from
>>>> one physical port and to receive them at the second physical port.
>>>> The 2 physical ports are directly connected with an RJ-45 cat6 cable.
>>>>
>>>> The issue is that I perform the same test with exactly the same
>>>> script and the same parameters several times, and the results show a
>>>> random behavior. For most of the tests there are no losses, but for
>>>> some of them I observe packet losses. The percentage of lost packets
>>>> is very variable. It happens even when the packet rate is very low.
>>>>
>>>> Some examples of random failed tests:
>>>>
>>>> # 1,000,000 packets sent (packet size = 124 bytes, rate = 76 Mbps) →
>>>> 10170 lost packets
>>>>
>>>> # 3,000,000 packets sent (packet size = 450 bytes, rate = 460 Mbps) →
>>>> ALL packets lost
>>>>
>>>> I tested the following system modifications without success:
>>>>
>>>> # BIOS parameters:
>>>>   Hyperthreading: enabled (because the machine has only 2 cores)
>>>>   Multi-processor: enabled
>>>>   Virtualization Technology (VT-x): disabled
>>>>   Virtualization Technology for Directed I/O (VT-d): disabled
>>>>   Allow PCIe/PCI SERR# Interrupt (= PCIe System Errors): disabled
>>>>   NUMA unavailable
>>>>
>>>> # use of isolcpus in order to isolate the cores which are in charge
>>>> of transmission and reception
>>>>
>>>> # hugepage size = 1048576 kB
>>>>
>>>> # size of buffer descriptors: tried with Tx = 512 descriptors and
>>>> Rx = 128 descriptors, and also with Tx = 4096 descriptors and
>>>> Rx = 4096 descriptors
>>>>
>>>> # Tested with 2 different X540-T2 NIC units
>>>>
>>>> # I also tested everything on a Dell FC430, which has an Intel Xeon
>>>> E5-2660 v3 CPU @ 2.6 GHz with 10 cores and 2 threads/core (tested
>>>> with and without hyper-threading)
>>>> → same results and even worse
>>>>
>>>> Remark concerning the NIC stats:
>>>> I used the rte_eth_stats struct in order to get more information
>>>> about the losses, and I observed that in some cases, when there is
>>>> packet loss, the ierrors
>>>> value is > 0 and also ierrors + imissed + ipackets < opackets. In
>>>> other cases I get ierrors = 0 and imissed + ipackets = opackets,
>>>> which makes more sense.
>>>>
>>>> What could be the origin of that erroneous packet counting?
>>>>
>>>> Do you have any explanation for that behaviour?
>>>
>>> Not knowing MoonGen at all other than a brief look at the source, I
>>> may not be much help, but I have a few ideas to help locate the
>>> problem.
>>>
>>> Try using testpmd in tx-only mode or try Pktgen to see if you get the
>>> same problem. I hope this would narrow down the problem to a specific
>>> area. As we know, DPDK works if correctly coded, and testpmd/pktgen
>>> work.
>>>
>>>> Thanks in advance.
>>>>
>>>> David
>>>
>>> Regards,
>>> Keith

--
Andriy Berestovskyy
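
[Editor's sketch: the arithmetic discussed in this thread, as standalone
Python. This is not MoonGen or DPDK code; the counter values are copied
from the portStats dump quoted above, and imissed is assumed to be 0
because it does not appear in that dump.]

```python
# Standalone sketch of the loss arithmetic in the thread above.
# Counter values are taken from the quoted portStats dump; the imissed
# value is an assumption (it is not shown in the dump).

def loss_ratio(tx, rx):
    """Fraction of transmitted packets that never arrived."""
    return (tx - rx) / tx

# One direction of the link: port 1 transmitted, port 0 received.
tx_port1 = 34481474848   # port 1 opackets
rx_port0 = 34481474846   # port 0 ipackets
ierrors0 = 1             # port 0 ierrors
imissed0 = 0             # assumed, not present in the dump

lost = tx_port1 - rx_port0
print(lost)  # 2

# The ratio is tiny (on the order of 1e-11), far below even "five
# nines" unavailability, which is why a desktop calculator rounds it
# down to 0.
print(f"{loss_ratio(tx_port1, rx_port0):.2e}")

# David's expected identity: ipackets + ierrors + imissed == peer
# opackets. Here it is off by 1, i.e. one of the two lost packets is
# not accounted for by any Rx counter.
unaccounted = tx_port1 - (rx_port0 + ierrors0 + imissed0)
print(unaccounted)  # 1
```

In a real DPDK application these counters would come from
rte_eth_stats_get(); the sketch only restates the numbers already
quoted in the thread.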