From: Andriy Berestovskyy
Date: Mon, 15 May 2017 10:25:55 +0200
To: dfernandes@toulouse.viveris.com
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Packet losses using DPDK

Hey,

It might be a silly guess, but do you wait for the links to be up and
ready before you start sending/receiving packets?
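Something along these lines would do it. It is just a minimal sketch
against the generic ethdev API (exact prototypes and constants vary a
bit between DPDK releases), and if I remember the API right, MoonGen's
device module also has a waitForLinks() helper for exactly this:

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_cycles.h>

/* Poll the port until it reports link up, or give up after timeout_ms.
 * Returns 0 on link up, -1 on timeout. */
static int
wait_for_link_up(uint16_t port_id, unsigned int timeout_ms)
{
        struct rte_eth_link link;

        while (timeout_ms > 0) {
                memset(&link, 0, sizeof(link));
                rte_eth_link_get_nowait(port_id, &link);
                if (link.link_status) {  /* non-zero means link up */
                        printf("Port %u: link up at %u Mbps\n",
                               port_id, link.link_speed);
                        return 0;
                }
                rte_delay_ms(100);
                timeout_ms = timeout_ms > 100 ? timeout_ms - 100 : 0;
        }
        printf("Port %u: link still down\n", port_id);
        return -1;
}

Call it for both ports right after rte_eth_dev_start() and before the
first TX burst; anything sent while the X540 is still autonegotiating
is simply lost on the wire.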
Andriy

On Fri, May 12, 2017 at 5:45 PM, dfernandes@toulouse.viveris.com wrote:
> Hi!
>
> I am working with MoonGen, which is a fully scriptable packet generator
> built on DPDK (→ https://github.com/emmericp/MoonGen).
>
> The system on which I run the tests has the following characteristics:
>
> CPU: Intel Core i3-6100 (3.70 GHz, 2 cores, 2 threads/core)
> NIC: X540-AT2 with 2x 10 GbE ports
> OS: Ubuntu Server 16.04 (kernel 4.4)
>
> I wrote a MoonGen script which requests DPDK to transmit packets from one
> physical port and to receive them on the second physical port. The two
> physical ports are directly connected with an RJ-45 Cat 6 cable.
>
> The issue is that I run the same test with exactly the same script and
> parameters several times, and the results show random behaviour. For most
> of the runs there are no losses, but for some of them I observe packet
> losses. The percentage of lost packets varies a lot, and it happens even
> when the packet rate is very low.
>
> Some examples of randomly failing tests:
>
> # 1,000,000 packets sent (packet size = 124 bytes, rate = 76 Mbps)
>   → 10,170 lost packets
>
> # 3,000,000 packets sent (packet size = 450 bytes, rate = 460 Mbps)
>   → ALL packets lost
>
> I tested the following system modifications without success:
>
> # BIOS parameters:
>   Hyper-threading: enabled (because the machine has only 2 cores)
>   Multi-processor: enabled
>   Virtualization Technology (VT-x): disabled
>   Virtualization Technology for Directed I/O (VT-d): disabled
>   Allow PCIe/PCI SERR# Interrupt (= PCIe System Errors): disabled
>   NUMA unavailable
>
> # use of isolcpus in order to isolate the cores which are in charge of
>   transmission and reception
>
> # hugepage size = 1048576 kB
>
> # size of the descriptor rings: tried with Tx = 512 descriptors and
>   Rx = 128 descriptors, and also with Tx = 4096 descriptors and
>   Rx = 4096 descriptors
>
> # tested with 2 different X540-T2 NIC units
>
> # I also ran everything on a Dell FC430, which has an Intel Xeon
>   E5-2660 v3 CPU @ 2.6 GHz with 10 cores and 2 threads/core (tested
>   with and without hyper-threading) → same results, or even worse
>
> Remark concerning the NIC stats:
> I used the rte_eth_stats struct to get more information about the losses
> and I observed that in some cases, when there are packet losses, the
> ierrors value is > 0 and also ierrors + imissed + ipackets < opackets.
> In other cases I get ierrors = 0 and imissed + ipackets = opackets, which
> makes more sense.
>
> What could be the origin of that erroneous packet counting?
>
> Do you have any explanation for that behaviour?
>
> Thanks in advance.
>
> David

-- 
Andriy Berestovskyy
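For completeness: the counters discussed above live in struct
rte_eth_stats and can be dumped with the standard rte_eth_stats_get()
call. A minimal sketch (single port, minimal error handling):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Print the basic per-port counters kept in struct rte_eth_stats. */
static void
print_port_stats(uint16_t port_id)
{
        struct rte_eth_stats stats;

        if (rte_eth_stats_get(port_id, &stats) != 0) {
                printf("Port %u: cannot read stats\n", port_id);
                return;
        }
        printf("Port %u: ipackets=%" PRIu64 " opackets=%" PRIu64
               " imissed=%" PRIu64 " ierrors=%" PRIu64
               " oerrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
               port_id, stats.ipackets, stats.opackets,
               stats.imissed, stats.ierrors,
               stats.oerrors, stats.rx_nombuf);
}

Note that imissed counts packets the NIC dropped because the RX
descriptor ring was full, which is worth watching with only 128 RX
descriptors. The extended counters from rte_eth_xstats_get() break
ierrors down further (CRC errors, missed packets, etc.) and can help
tell a counting artefact from a real drop.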