From: "Traynor, Kevin" <kevin.traynor@intel.com>
To: Andrey Korolyov <andrey@xdel.ru>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"discuss@openvswitch.org" <discuss@openvswitch.org>
Subject: Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
Date: Fri, 13 Feb 2015 10:58:23 +0000
Message-ID: <BC0FEEC7D7650749874CEC11314A88F7306A5B5F@IRSMSX104.ger.corp.intel.com>
In-Reply-To: <CABYiri9b_fZJNg186vHLTWVWsAy9UqBvN32WTi26bO5+o93mPQ@mail.gmail.com>
> -----Original Message-----
> From: Andrey Korolyov [mailto:andrey@xdel.ru]
> Sent: Thursday, February 12, 2015 3:16 PM
> To: Traynor, Kevin
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: Packet drops during non-exhaustive flood with OVS and 1.8.0
>
> On Thu, Feb 12, 2015 at 6:05 PM, Traynor, Kevin <kevin.traynor@intel.com> wrote:
> >> -----Original Message-----
> >> From: Andrey Korolyov [mailto:andrey@xdel.ru]
> >> Sent: Tuesday, February 3, 2015 5:21 PM
> >> To: Traynor, Kevin
> >> Cc: dev@dpdk.org; discuss@openvswitch.org
> >> Subject: Re: Packet drops during non-exhaustive flood with OVS and 1.8.0
> >>
> >> > These patches are to enable DPDK 1.8 only. What 'bulk processing' are you referring to?
> >> > By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked
> >> > patch doesn't change this, just the DPDK version.
> >>
> >> Sorry, I referred to the wrong part there: bulk transmission, which is
> >> clearly not involved in my case. The idea was that conditionally
> >> enabling prefetch for rx queues (BULK_ALLOC) may help somehow, but
> >> it'll probably mask the issue rather than solve it directly. By my
> >> understanding, a strict drop rule should have zero impact on the main
> >> ovs thread (and this is true) and should work at line rate
> >> (this is not the case).
> >
> > I've set a similar drop rule and I'm seeing the first packet drops occurring
> > at 13.9 mpps for 64 byte pkts. I'm not sure if there is a config that can be
> > changed or if it's just the cost of the emc/lookups.
> >
>
> Do you mind comparing this case with forwarding to the dummy port
> (ifconfig dummy0; ovs-vsctl add-port br0 dummy0; ip link set dev
> dummy0 up; flush the rule table; create a single forward rule; start the
> flood)? As I mentioned, there are no signs of syscall congestion for the
> drop or dpdk-dpdk forward cases.
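For concreteness, here's roughly the sequence I used to reproduce that
setup (a sketch only; the bridge name br0 and the OpenFlow port numbers
below are assumptions, not taken from your message):

    modprobe dummy                                        # creates dummy0 by default
    ovs-vsctl add-port br0 dummy0
    ip link set dev dummy0 up
    ovs-ofctl del-flows br0                               # flush the rule table
    ovs-ofctl add-flow br0 "in_port=1,actions=output:2"   # single dpdk port -> dummy0 forward rule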
Assuming I've understood your setup, I get a very low rate (~1.1 mpps)
without packet loss, as the packets are being sent from a dpdk port to a
socket for the dummy port.
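For reference, the drop rule I mentioned above was along these lines
(again a sketch; the in_port number is an assumption):

    ovs-ofctl del-flows br0
    ovs-ofctl add-flow br0 "in_port=1,actions=drop"

With only that rule installed the packets never reach a tx path, which
would be consistent with the drops starting at ~13.9 mpps being down to
the emc/lookup cost mentioned above rather than a queueing problem.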
Thread overview: 9+ messages
2015-01-21 17:02 Andrey Korolyov
2015-01-22 17:11 ` Andrey Korolyov
[not found] ` <CABYiri_q4QqWrhTQL-UZ1wf3FLz-wj9SbtBJrDntz2Bw4cEPoQ@mail.gmail.com>
2015-02-03 17:02 ` Traynor, Kevin
2015-02-03 17:21 ` Andrey Korolyov
2015-02-06 14:43 ` Andrey Korolyov
2015-02-12 15:05 ` Traynor, Kevin
2015-02-12 15:15 ` Andrey Korolyov
2015-02-13 10:58 ` Traynor, Kevin [this message]
2015-02-16 22:37 ` Andrey Korolyov