DPDK patches and discussions
From: "Traynor, Kevin" <kevin.traynor@intel.com>
To: Andrey Korolyov <andrey@xdel.ru>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "discuss@openvswitch.org" <discuss@openvswitch.org>
Subject: Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
Date: Tue, 3 Feb 2015 17:02:22 +0000	[thread overview]
Message-ID: <BC0FEEC7D7650749874CEC11314A88F730688B53@IRSMSX104.ger.corp.intel.com> (raw)
In-Reply-To: <CABYiri_q4QqWrhTQL-UZ1wf3FLz-wj9SbtBJrDntz2Bw4cEPoQ@mail.gmail.com>


> -----Original Message-----
> From: Andrey Korolyov [mailto:andrey@xdel.ru]
> Sent: Monday, February 2, 2015 10:53 AM
> To: dev@dpdk.org
> Cc: discuss@openvswitch.org; Traynor, Kevin
> Subject: Re: Packet drops during non-exhaustive flood with OVS and 1.8.0
> 
> On Thu, Jan 22, 2015 at 8:11 PM, Andrey Korolyov <andrey@xdel.ru> wrote:
> > On Wed, Jan 21, 2015 at 8:02 PM, Andrey Korolyov <andrey@xdel.ru> wrote:
> >> Hello,
> >>
> >> I observed that the latest OVS with dpdk-1.8.0 and igb_uio starts to
> >> drop packets earlier than a regular Linux ixgbe 10G interface. The
> >> setup is as follows:
> >>
> >> receiver/forwarder:
> >> - 8-core/2-socket system with E5-2603v2, cores 1-3 given to OVS exclusively
> >> - n-dpdk-rxqs=6, rx scattering is not enabled
> >> - Intel X520-DA NIC
> >> - 3.10/3.18 host kernel
> >> - during 'legacy mode' testing, queue interrupts are spread across all cores
> >>
> >> sender:
> >> - 16-core E5-2630, netmap framework for packet generation
> >> - pkt-gen -f tx -i eth2 -s 10.6.9.0-10.6.9.255 -d 10.6.10.0-10.6.10.255 \
> >>     -S 90:e2:ba:84:19:a0 -D 90:e2:ba:85:06:07 -R 11000000
> >>   This produces an 11 Mpps flood of 60-byte packets; the rate stays
> >>   constant throughout the test.
> >>
> >> OVS contains only single drop rule at the moment:
> >> ovs-ofctl add-flow br0 in_port=1,actions=DROP
> >>
> >> The packet generator was launched for tens of seconds against both the
> >> Linux stack and the OVS+DPDK case. The first shows a zero drop/error
> >> count on the interface, and the counter values on the pktgen side match
> >> the host interface stats (meaning none of the generated packets go
> >> unaccounted).
> >>
> >> I selected a rate of about 11 Mpps because OVS starts to drop packets
> >> around this value; after the same short test, the interface stats show
> >> the following:
> >>
> >> statistics          : {collisions=0, rx_bytes=22003928768,
> >> rx_crc_err=0, rx_dropped=0, rx_errors=10694693, rx_frame_err=0,
> >> rx_over_err=0, rx_packets=343811387, tx_bytes=0, tx_dropped=0,
> >> tx_errors=0, tx_packets=0}
> >>
> >> pktgen side:
> >> Sent 354506080 packets, 60 bytes each, in 32.23 seconds.
> >> Speed: 11.00 Mpps Bandwidth: 5.28 Gbps (raw 7.39 Gbps)
> >>
> >> If the rate is increased to 13-14 Mpps, the error-to-total ratio rises
> >> to about one third. Apart from this, OVS on DPDK shows perfect results
> >> and I do not want to reject this solution because of exhaustion
> >> behavior like the one described, so I'm open to any suggestions to
> >> improve the situation (except using the 1.7 branch :) ).
> >
> > At a glance it looks like there is a problem with the pmd threads, as
> > they start to consume about five thousandths of sys% on the dedicated
> > cores during the flood, though in theory they should not. Any ideas for
> > debugging/improving this situation are very welcome!
> 
> Since my last message I have tried a couple of different configurations,
> but packet loss starts to happen as early as 7-8 Mpps. It looks like the
> bulk processing that was present in the OVS-DPDK distribution is missing
> from the patch series
> (http://openvswitch.org/pipermail/dev/2014-December/049722.html,
> http://openvswitch.org/pipermail/dev/2014-December/049723.html).
> Before implementing this myself, I would like to know whether there are
> any obvious (though not to me, unfortunately) clues to this performance
> issue.

These patches are to enable DPDK 1.8 only. What 'bulk processing' are you referring to? 
By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked 
patches don't change this, just the DPDK version.
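
To illustrate what that batch size means: the rx path is a polling loop around
DPDK's rte_eth_rx_burst(), asking the NIC for up to 192 packets per call. A
minimal sketch, not taken from the OVS source (the constant name and the drop
loop are illustrative only):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RX_BATCH 192   /* default netdev-dpdk rx batch size per this thread */

    /* Poll one rx queue once and free everything received - effectively
     * what the single in_port=1,actions=DROP flow ends up doing. */
    static void
    poll_and_drop(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[RX_BATCH];
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, RX_BATCH);

        for (uint16_t i = 0; i < nb_rx; i++) {
            rte_pktmbuf_free(pkts[i]);
        }
    }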

Main things to consider are to isolate the CPUs (isolcpus), pin the pmd threads, and keep 
everything on 1 NUMA socket. At 11 Mpps without packet loss on that processor I suspect 
you are doing those things already.

> 
> Thanks!

Thread overview: 9+ messages
2015-01-21 17:02 Andrey Korolyov
2015-01-22 17:11 ` Andrey Korolyov
     [not found]   ` <CABYiri_q4QqWrhTQL-UZ1wf3FLz-wj9SbtBJrDntz2Bw4cEPoQ@mail.gmail.com>
2015-02-03 17:02     ` Traynor, Kevin [this message]
2015-02-03 17:21       ` Andrey Korolyov
2015-02-06 14:43         ` Andrey Korolyov
2015-02-12 15:05         ` Traynor, Kevin
2015-02-12 15:15           ` Andrey Korolyov
2015-02-13 10:58             ` Traynor, Kevin
2015-02-16 22:37               ` Andrey Korolyov
