DPDK patches and discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: Aaron Campbell <aaron@arbor.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Polling too often at lower packet rates?
Date: Wed, 8 Apr 2015 11:00:58 -0700
Message-ID: <CAOaVG153Z4=fVwFxJ__KbVj1WCmWVHk_EiuoQG5_9bVmpiWvFw@mail.gmail.com>
In-Reply-To: <5D6C8629-393C-4195-8063-8168E206335B@arbor.net>

We use an adaptive polling loop similar to the l3fwd-power example.
See:

http://video.fosdem.org/2015/devroom-network_management_and_sdn/
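
For illustration, here is a minimal sketch of such an adaptive loop, in the
spirit of the l3fwd-power idle heuristic (the constants, function names, and
the exact back-off policy below are made up for this sketch, not taken from
the example):

#include <unistd.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_pause.h>

#define BURST_SIZE     32
#define IDLE_THRESHOLD 300   /* empty polls tolerated before sleeping */
#define SLEEP_US       10    /* back-off sleep once idle */

static void
rx_loop(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    unsigned int idle = 0;

    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id,
                                          bufs, BURST_SIZE);
        if (nb_rx == 0) {
            /* Back off progressively while polls come back empty. */
            if (++idle > IDLE_THRESHOLD)
                usleep(SLEEP_US);   /* yield the core briefly */
            else
                rte_pause();        /* cheap spin-wait relax */
            continue;
        }
        idle = 0;
        for (uint16_t i = 0; i < nb_rx; i++) {
            /* process_packet(bufs[i]); -- application-specific hook */
            rte_pktmbuf_free(bufs[i]);
        }
    }
}

l3fwd-power itself goes further (core frequency scaling, and switching over
to RX interrupts when truly idle), but the back-off above is the basic idea.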

On Wed, Apr 8, 2015 at 9:35 AM, Aaron Campbell <aaron@arbor.net> wrote:

> Hi,
>
> I have a machine with 6 DPDK ports (4 igb, 2 ixgbe), with 1.23Mpps traffic
> offered to only one of the 10G ports (the other 5 are unused).  I also have
> a program with a pretty standard looking DPDK receive loop, where it calls
> rte_eth_rx_burst() for each configured port.  If I configure the loop to
> read from all 6 ports, it can read the 1.23Mpps rate with no drops.  If I
> configure the loop to poll only 1 port (the ixgbe receiving the traffic), I
> lose about 1/3rd of the packets (i.e., the NIC drops ~400Kpps).
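
A minimal sketch of the kind of receive loop described above, assuming one
RX queue per port; the port count and the per-packet hook are placeholders,
not the program's actual code:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
poll_ports(uint16_t nb_ports)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        for (uint16_t port = 0; port < nb_ports; port++) {
            uint16_t nb_rx = rte_eth_rx_burst(port, 0 /* queue */,
                                              bufs, BURST_SIZE);
            for (uint16_t i = 0; i < nb_rx; i++) {
                /* handle_packet(bufs[i]); -- application hook */
                rte_pktmbuf_free(bufs[i]);
            }
        }
    }
}

Note that each extra polled port adds work between consecutive polls of the
busy port, which is one plausible reason the drop rate fell as ports were
added.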
>
> Another data point is that if I configure the loop to read from 3 out of
> the 6 ports, the drop rate is reduced to less than half (i.e., the NIC is
> only dropping ~190Kpps now).  So it seems that in this test, throughput
> improves by adding NICs, not removing them, which is counter-intuitive.
> Again, I get no drops when polling all 6 ports.  Note, the burst size is 32.
>
> I did find a reference to a similar issue in a recent paper (
> http://www.net.in.tum.de/fileadmin/bibtex/publications/papers/ICN2015.pdf),
> Section III, which states:
>
> "The DPDK L2FWD application initially only managed to forward 13.8 Mpps in
> the single direction test at the maximum CPU frequency, a similar result
> can be found in [11]. Reducing the CPU frequency increased the throughput
> to the expected value of 14.88 Mpps. Our investigation of this anomaly
> revealed that the lack of any processing combined with the fast CPU caused
> DPDK to poll the NIC too often. DPDK does not use interrupts, it utilizes a
> busy wait loop that polls the NIC until at least one packet is returned.
> This resulted in a high poll rate which affected the throughput. We limited
> the poll rate to 500,000 poll operations per second (i.e., a batch size of
> about 30 packets) and achieved line rate in the unidirectional test with
> all frequencies. This effect was only observed with the X520 NIC, tests
> with X540 NICs did not show this anomaly.”
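
The paper does not reproduce its rate-limiting patch; a hedged sketch of one
way such a cap could look, using the TSC to enforce a minimum gap between
polls (MAX_POLLS_PER_SEC is the paper's figure, everything else here is
illustrative):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_cycles.h>
#include <rte_pause.h>

#define BURST_SIZE        32
#define MAX_POLLS_PER_SEC 500000   /* the paper's figure */

static void
rate_limited_rx(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    const uint64_t gap = rte_get_tsc_hz() / MAX_POLLS_PER_SEC;
    uint64_t next_poll = rte_get_tsc_cycles();

    for (;;) {
        /* Spin until the minimum inter-poll gap has elapsed. */
        while (rte_get_tsc_cycles() < next_poll)
            rte_pause();
        next_poll = rte_get_tsc_cycles() + gap;

        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id,
                                          bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
}

Capping the poll rate effectively forces larger batches: at 14.88 Mpps,
500,000 polls per second works out to about 30 packets per poll, matching
the batch size the paper cites.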
>
> Another reference, from this mailing list last year (
> http://wiki.dpdk.org/ml/archives/dev/2014-January/001169.html):
>
> "I suggest you to check average burst sizes on receive queues. Looks like
> I stumbled upon a similar issue several times. If you are calling
> rte_eth_rx_burst too frequently, NIC begins losing packets no matter how
> many CPU horse power you have (more you have, more it loses, actually). In
> my case this situation occured when average burst size is less than 20
> packets or so. I'm not sure what's the reason for this behavior, but I
> observed it on several applications on Intel 82599 10Gb cards.”
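
A sketch of the suggested diagnostic, counting packets per
rte_eth_rx_burst() call and printing the running average (the reporting
interval and names are illustrative):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE      32
#define REPORT_INTERVAL 1000000   /* polls between reports */

static void
rx_with_burst_stats(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    uint64_t polls = 0, pkts = 0;

    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id,
                                          bufs, BURST_SIZE);
        polls++;
        pkts += nb_rx;
        if (polls == REPORT_INTERVAL) {
            printf("average burst size: %.2f packets/poll\n",
                   (double)pkts / polls);
            polls = 0;
            pkts = 0;
        }
        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
}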
>
> So I’m wondering if anyone can explain at a lower level what happens when
> you poll “too often”, and if there are any practical workarounds.  We’re
> using this same program and DPDK version to process 10G line-rate in other
> scenarios, so I’m confident that the overall packet capture architecture is
> sound.
>
> -Aaron

Thread overview: 7+ messages
2015-04-08 16:35 Aaron Campbell
2015-04-08 18:00 ` Stephen Hemminger [this message]
2015-04-09 18:26   ` Aaron Campbell
2015-04-09 21:24     ` Stephen Hemminger
2015-04-10  0:42       ` Paul Emmerich
2015-04-10  1:05         ` Paul Emmerich
2015-04-08 19:11 ` Ananyev, Konstantin
