DPDK patches and discussions
From: Andrey Korolyov <andrey@xdel.ru>
To: "Traynor, Kevin" <kevin.traynor@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"discuss@openvswitch.org" <discuss@openvswitch.org>
Subject: Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
Date: Fri, 6 Feb 2015 18:43:35 +0400	[thread overview]
Message-ID: <CABYiri9CjS0ME-WnVXEKE-7Z6rkxLfTe3UO=z13qzbJ1ap78NQ@mail.gmail.com> (raw)
In-Reply-To: <CABYiri8qr2v8r_XQO35swWt_OO=WQtW0jO=KAz70f5VojhM6kg@mail.gmail.com>

On Tue, Feb 3, 2015 at 8:21 PM, Andrey Korolyov <andrey@xdel.ru> wrote:
>> These patches are to enable DPDK 1.8 only. What 'bulk processing' are you referring to?
>> By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked
>> patch doesn't change this, just the DPDK version.
>
> Sorry, I was referring to the wrong part there: bulk transmission, which is
> clearly not involved in my case. The idea was that conditionally
> enabling prefetch for the rx queues (BULK_ALLOC) might help somehow, but
> it would probably mask the issue rather than solve it directly. As I
> understand it, a strict drop rule should have zero impact on the main
> ovs thread (and this is true) and should work just fine at line rate
> (this is not the case).
>
>>
>> Main things to consider are to use isolcpus, pin the pmd thread and keep everything
>> on one NUMA socket. At 11 Mpps without packet loss on that processor I suspect you are
>> doing those things already.
>
> Yes, with all those tuning improvements in place I was able to do this, but
> the bare Linux stack on the same machine can handle 12 Mpps, and there
> are absolutely no hints of what exactly is being congested.

Also, both action=NORMAL and action=output:<non-dpdk port> manage flow
control in such a way that the generator side reaches line rate
(14.8 Mpps) on 60-byte data packets, although a very high drop ratio
persists. With action=DROP, or action=output:X where X is another dpdk
port, flow control settles at around 13 Mpps. Of course, using a regular
host interface or the NORMAL action generates a lot of context switches,
mainly from miniflow_extract() and the emc_..() functions, but the
difference in syscall distribution between a congested link (line rate
is reached) and a non-congested one is unobservable.
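
As a sanity check on the numbers: 14.8 Mpps is simply the line rate for
minimum-size frames, assuming a 10 Gbit/s link (which the figures above
imply). A 60-byte packet carries a 4-byte CRC plus 8 bytes of preamble and
a 12-byte inter-frame gap on the wire, 84 bytes per frame in total:

```python
# Back-of-the-envelope 10 GbE line rate for minimum-size Ethernet frames.
LINK_BPS = 10_000_000_000  # assumed 10 Gbit/s link
PACKET = 60                # packet size as counted by the generator, bytes
CRC = 4                    # frame check sequence
PREAMBLE = 8               # preamble + start-of-frame delimiter
IFG = 12                   # minimum inter-frame gap

wire_bytes = PACKET + CRC + PREAMBLE + IFG   # 84 bytes per frame on the wire
pps = LINK_BPS / (wire_bytes * 8)
print(f"{pps / 1e6:.2f} Mpps")               # ~14.88 Mpps, i.e. line rate
```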


Thread overview: 9+ messages
2015-01-21 17:02 Andrey Korolyov
2015-01-22 17:11 ` Andrey Korolyov
     [not found]   ` <CABYiri_q4QqWrhTQL-UZ1wf3FLz-wj9SbtBJrDntz2Bw4cEPoQ@mail.gmail.com>
2015-02-03 17:02     ` Traynor, Kevin
2015-02-03 17:21       ` Andrey Korolyov
2015-02-06 14:43         ` Andrey Korolyov [this message]
2015-02-12 15:05         ` Traynor, Kevin
2015-02-12 15:15           ` Andrey Korolyov
2015-02-13 10:58             ` Traynor, Kevin
2015-02-16 22:37               ` Andrey Korolyov
