From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michael Quicquaro
Date: Mon, 27 Jan 2014 15:00:56 -0500
To: Dmitry Vyal
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] Rx-errors with testpmd (only 75% line rate)
In-Reply-To: <52DFDB10.2090906@gmail.com>
References: <52DFDB10.2090906@gmail.com>

Dmitry,
I cannot thank you enough for this information. This too was my main
problem. I put a "small" unmeasured delay before the call to
rte_eth_rx_burst() and suddenly it started returning bursts of 512 packets
vs. 4!!

Best Regards,
Mike
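P.S. For reference, a rough sketch of the "pacing" idea described above.
This is not my actual code; the 10 us delay, the 512-packet burst size and
the stats counters are purely illustrative:

/*
 * Minimal sketch: pace calls to rte_eth_rx_burst() so the NIC has time to
 * accumulate a reasonably full burst, and track the average burst size so
 * the effect is visible.  All values here are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

#include <rte_cycles.h>   /* rte_delay_us() */
#include <rte_ethdev.h>   /* rte_eth_rx_burst() */
#include <rte_mbuf.h>     /* rte_pktmbuf_free() */

#define BURST_SIZE 512
#define PAUSE_US   10     /* "small" delay before polling again */

static void
rx_loop(uint16_t port_id, uint16_t queue_id)
{
        struct rte_mbuf *pkts[BURST_SIZE];
        uint64_t total_pkts = 0, total_polls = 0;

        for (;;) {
                /* Give the NIC a moment to fill its descriptor ring
                 * instead of polling back-to-back and getting
                 * near-empty bursts. */
                rte_delay_us(PAUSE_US);

                uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id,
                                                  pkts, BURST_SIZE);
                if (nb_rx == 0)
                        continue;

                total_pkts  += nb_rx;
                total_polls += 1;

                /* rx_only-style processing: just drop the packets. */
                for (uint16_t i = 0; i < nb_rx; i++)
                        rte_pktmbuf_free(pkts[i]);

                /* Report the average burst size now and then; if it stays
                 * in the single digits, the queue is being polled too
                 * eagerly. */
                if ((total_polls & 0xFFFFF) == 0)
                        printf("queue %u: avg burst = %.1f\n", queue_id,
                               (double)total_pkts / total_polls);
        }
}

In a real application an adaptive pause (e.g. only backing off after a
string of empty or tiny bursts) would probably be better than a fixed
delay, but the fixed delay is enough to show the difference in burst size.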
On Wed, Jan 22, 2014 at 9:52 AM, Dmitry Vyal wrote:

> Hello Michael,
>
> I suggest you check the average burst sizes on your receive queues. It
> looks like I have stumbled upon a similar issue several times. If you call
> rte_eth_rx_burst() too frequently, the NIC begins losing packets no matter
> how much CPU horsepower you have (the more you have, the more it loses,
> actually). In my case this situation occurred when the average burst size
> was less than 20 packets or so. I'm not sure what the reason for this
> behavior is, but I have observed it with several applications on Intel
> 82599 10Gb cards.
>
> Regards, Dmitry
>
>
> On 01/09/2014 11:28 PM, Michael Quicquaro wrote:
>
>> Hello,
>> My hardware is a Dell PowerEdge R820:
>> 4x Intel Xeon E5-4620 2.20GHz, 8 cores
>> 16GB RDIMM 1333 MHz Dual Rank, x4 - Quantity 16
>> Intel X520 DP 10Gb DA/SFP+
>>
>> So in summary: 32 cores @ 2.20GHz and 256GB RAM
>>
>> ... plenty of horsepower.
>>
>> I've reserved 16 1GB hugepages.
>>
>> I am configuring only one interface and using testpmd in rx_only mode to
>> first see if I can receive at line rate.
>>
>> I am generating traffic on a different system which is running the netmap
>> pkt-gen program, generating 64-byte packets at close to line rate.
>>
>> I am only able to receive approx. 75% of line rate, and I see the
>> Rx-errors in the port stats going up proportionally.
>> I have verified that all receive queues are being used, but strangely
>> enough, it doesn't matter how many queues beyond 2 I use; the
>> throughput is the same. I have verified with 'mpstat -P ALL' that all
>> specified cores are used. The utilization of each core is only roughly
>> 25%.
>>
>> Here is my command line:
>> testpmd -c 0xffffffff -n 4 -- --nb-ports=1 --coremask=0xfffffffe
>> --nb-cores=8 --rxd=2048 --txd=2048 --mbcache=512 --burst=512 --rxq=8
>> --txq=8 --interactive
>>
>> What can I do to track down this problem? It seems very similar to a
>> thread on this list back in May titled "Best example for showing
>> throughput?" where no resolution was ever mentioned in the thread.
>>
>> Thanks for any help.
>> - Michael
>>
>
>