DPDK patches and discussions
From: Jun Han <junhanece@gmail.com>
To: Dmitry Vyal <dmitryvyal@gmail.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Rx-errors with testpmd (only 75% line rate)
Date: Mon, 10 Feb 2014 18:34:47 +0100	[thread overview]
Message-ID: <CAGeT4PJkUzGzb_p15tccZOH8mMPvyENF4-h7WYvTmD9LO-xhnA@mail.gmail.com> (raw)
In-Reply-To: <52E76AD3.9040404@gmail.com>

Hi Michael,

We are also trying to purchase an IXIA traffic generator. Could you let us
know which chassis + load modules you are using so we can use that as a
reference to look for the model we need? There seem to be quite a number
of different models.

Thank you.


On Tue, Jan 28, 2014 at 9:31 AM, Dmitry Vyal <dmitryvyal@gmail.com> wrote:

> On 01/28/2014 12:00 AM, Michael Quicquaro wrote:
>
>> Dmitry,
>> I cannot thank you enough for this information.  This too was my main
>> problem.  I put a "small" unmeasured delay before the call to
>> rte_eth_rx_burst() and suddenly it started returning bursts of 512
>> packets vs. 4!!
>> Best Regards,
>> Mike
>>
>>
> Thanks for confirming my guesses! By the way, make sure the number of
> packets you receive in a single burst is less than the configured queue
> size, or you will lose packets too. Maybe your "small" delay is not so
> small :) For my own purposes I use a delay of about 150 usecs.
>
> P.S. I wonder why this issue is not mentioned in documentation. Is it
> evident for everyone doing network programming?
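
A minimal sketch of the receive loop being discussed, for reference (this is
not code from the thread): pace rte_eth_rx_burst() with a short delay, as
Dmitry suggests, and keep the burst size below the configured RX ring size
(--rxd). Apart from the 150 us figure, the constants and the port/queue
handling are illustrative assumptions.

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_cycles.h>

#define BURST_SIZE    512   /* must stay below the RX ring size, e.g. --rxd=2048 */
#define POLL_DELAY_US 150   /* pacing delay along the lines Dmitry describes */

static void
rx_only_loop(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        /* Give the NIC time to fill descriptors so each call returns
         * a large burst instead of a handful of packets. */
        rte_delay_us(POLL_DELAY_US);

        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id,
                                          bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);  /* rx_only: just drop the packets */
    }
}
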
>
>
>
>
>> On Wed, Jan 22, 2014 at 9:52 AM, Dmitry Vyal <dmitryvyal@gmail.com> wrote:
>>
>>     Hello Michael,
>>
>>     I suggest you check the average burst sizes on your receive queues.
>>     It looks like I have stumbled upon a similar issue several times. If
>>     you call rte_eth_rx_burst too frequently, the NIC begins losing
>>     packets no matter how much CPU horsepower you have (the more you
>>     have, the more it loses, actually). In my case this situation
>>     occurred when the average burst size was less than 20 packets or so.
>>     I'm not sure what the reason for this behavior is, but I have
>>     observed it in several applications on Intel 82599 10Gb cards.
>>
>>     Regards, Dmitry
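
One way to check the average burst size Dmitry refers to is to count packets
and calls over a window and look at the ratio. A hedged sketch, with the
function name, window length, and burst size chosen purely for illustration:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 64

static void
measure_avg_burst(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    uint64_t calls = 0, pkts = 0;

    while (calls < 1000000) {
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id,
                                          bufs, BURST_SIZE);
        calls++;
        pkts += nb_rx;
        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
    /* If this ratio is well below ~20, the poll loop is spinning too fast. */
    printf("average burst size: %.2f\n", (double)pkts / calls);
}
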
>>
>>
>>
>>     On 01/09/2014 11:28 PM, Michael Quicquaro wrote:
>>
>>         Hello,
>>         My hardware is a Dell PowerEdge R820:
>>         4x Intel Xeon E5-4620 2.20GHz 8 core
>>         16GB RDIMM 1333 MHz Dual Rank, x4 - Quantity 16
>>         Intel X520 DP 10Gb DA/SFP+
>>
>>         So in summary 32 cores @ 2.20GHz and 256GB RAM
>>
>>         ... plenty of horsepower.
>>
>>         I've reserved 16 1GB Hugepages
>>
>>         I am configuring only one interface and using testpmd in
>>         rx_only mode to
>>         first see if I can receive at line rate.
>>
>>         I am generating traffic on a different system which is running
>>         the netmap
>>         pkt-gen program - generating 64 byte packets at close to line
>>         rate.
>>
>>         I am only able to receive approx. 75% of line rate and I see
>>         the Rx-errors
>>         in the port stats going up proportionally.
>>         I have verified that all receive queues are being used, but
>>         strangely enough, using more than 2 queues makes no difference
>>         to the throughput.  I have verified with 'mpstat -P ALL' that
>>         all specified cores are used.  The utilization of each core is
>>         only roughly 25%.
>>
>>         Here is my command line:
>>         testpmd -c 0xffffffff -n 4 -- --nb-ports=1 --coremask=0xfffffffe
>>         --nb-cores=8 --rxd=2048 --txd=2048 --mbcache=512 --burst=512
>>         --rxq=8
>>         --txq=8 --interactive
>>
>>         What can I do to trace down this problem?  It seems very
>>         similar to a
>>         thread on this list back in May titled "Best example for showing
>>         throughput?" where no resolution was ever mentioned in the thread.
>>
>>         Thanks for any help.
>>         - Michael
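
To trace where the drops are being counted, one option is to read the same
port counters that testpmd reports. A hedged sketch (field names follow the
current struct rte_eth_stats layout; older DPDK releases fold missed packets
into ierrors):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

static void
dump_rx_drops(uint16_t port_id)
{
    struct rte_eth_stats stats;

    rte_eth_stats_get(port_id, &stats);  /* recent releases also return an error code */

    printf("ipackets=%" PRIu64 " imissed=%" PRIu64
           " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
           stats.ipackets,    /* packets successfully received */
           stats.imissed,     /* dropped by the NIC: RX ring not drained in time */
           stats.ierrors,     /* erroneous/malformed packets */
           stats.rx_nombuf);  /* RX mbuf allocation failures */
}

A steadily rising imissed (or ierrors on older releases) while the cores sit
mostly idle is the signature of the too-frequent-polling problem described
earlier in the thread.
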
>>
>>
>>
>>
>

Thread overview: 10+ messages
2014-01-09 19:28 Michael Quicquaro
2014-01-09 21:21 ` François-Frédéric Ozog
2014-01-22 14:52 ` Dmitry Vyal
2014-01-22 17:46   ` Wang, Shawn
2014-01-27 20:00   ` Michael Quicquaro
2014-01-28  8:31     ` Dmitry Vyal
2014-02-10 17:34       ` Jun Han [this message]
2014-01-22 20:38 ` Robert Sanford
2014-01-23 23:22   ` Michael Quicquaro
2014-01-24  9:18     ` François-Frédéric Ozog
