DPDK patches and discussions
From: Chas Williams <3chas3@gmail.com>
To: Matan Azrad <matan@mellanox.com>
Cc: bluca@debian.org, Eric Kinzie <ehkinzie@gmail.com>,
	dev@dpdk.org,  Declan Doherty <declan.doherty@intel.com>,
	Chas Williams <chas3@att.com>,
	stable@dpdk.org
Subject: Re: [dpdk-dev] [dpdk-stable] [PATCH v4] net/bonding: per-slave intermediate rx ring
Date: Sun, 9 Sep 2018 16:57:58 -0400	[thread overview]
Message-ID: <CAG2-Gk=CMEfz7DN4+ZDZXpEyOaDgAZR0ELRF-esoTeZF4QoV6g@mail.gmail.com> (raw)
In-Reply-To: <AM0PR0502MB4019288257E263EC8A6AEE9AD20D0@AM0PR0502MB4019.eurprd05.prod.outlook.com>

On Sun, Sep 2, 2018 at 7:34 AM Matan Azrad <matan@mellanox.com> wrote:
>
> Hi Luca/Chas
>
> From: Luca Boccassi
> > On Wed, 2018-08-29 at 15:20 +0000, Matan Azrad wrote:
> > >
> > > From: Chas Williams
> > > > On Tue, Aug 28, 2018 at 5:51 AM Matan Azrad <matan@mellanox.com> wrote:
> > > >
> > > >
> > > > From: Chas Williams
> > > > > On Mon, Aug 27, 2018 at 11:30 AM Matan Azrad <matan@mellanox.com> wrote:
> > > >
> > > > <snip>
> > > > > > > Because rings are generally quite efficient.
> > > > > >
> > > > > > But you are using a ring in addition to the regular array
> > > > > > management, so it must hurt the performance of the bonding PMD
> > > > > > (meaning the bonding itself, not the slave PMDs which are called
> > > > > > from the bonding).
> > > > >
> > > > > It adds latency.
> > > >
> > > > And that hurts the application performance because it takes more
> > > > CPU time in the bonding PMD.
> > > >
> > > > No, as I said before it takes _less_ CPU time in the bonding PMD
> > > > because we use a more optimal read from the slaves.
> > >
> > > Each packet pointer has to be copied 2 more times because of this
> > > patch, plus some management (the ring overhead), so in the bonding
> > > code you lose performance.
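
For reference, a minimal sketch of the intermediate-ring rx path being
discussed (illustrative only, compressed into one function, with the
ring-full and error handling omitted; it is not the patch under review):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define SLAVE_READ_BURST 64   /* illustrative burst size */

    static uint16_t
    ring_rx_sketch(uint16_t slave_port, uint16_t queue_id, struct rte_ring *r,
                   struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
        struct rte_mbuf *tmp[SLAVE_READ_BURST];

        /* A large fixed-size read from the slave into a local array. */
        uint16_t n = rte_eth_rx_burst(slave_port, queue_id, tmp,
                                      SLAVE_READ_BURST);

        /* First extra pointer copy: local array -> per-slave ring. */
        rte_ring_enqueue_burst(r, (void **)tmp, n, NULL);

        /* Second extra pointer copy: ring -> caller's array. */
        return (uint16_t)rte_ring_dequeue_burst(r, (void **)bufs, nb_pkts, NULL);
    }

This is where the "copied 2 more times" above comes from: each mbuf pointer
is written once into the ring and once more out of it, on top of the copy
the slave PMD itself does into the local array.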
> > >
> > > >
> > > > > It increases performance because we spend less CPU time reading
> > > > > from the PMDs.
> > > >
> > > > So, it's a hack in the bonding PMD that improves the performance of
> > > > some slaves' code while hurting the bonding code's performance.
> > > > Overall, the performance gained for those slaves improves the
> > > > application performance only when working with those slaves, but it
> > > > may hurt the application performance when working with other slaves.
> > > >
> > > > What is your evidence that it hurts bonding performance?  Your
> > > > argument is purely theoretical.
> > >
> > > Yes, we cannot test all the scenarios across the PMDs.
> >
> > Chas has evidence that this helps, a _lot_, in some very common cases.
> > We haven't seen evidence of negative impact anywhere in 2 years. Given
> > this, surely it's not unreasonable to ask to substantiate theoretical arguments
> > with some testing?
>
> What are the common cases of bond usage?
> Do you really know all the variants of bond usage spread all over the world?

We actually have a fairly large number of deployments using this bonding
code across a couple of different adapter types (mostly Intel, though, plus
some virtual usage).  The patch was designed to address starvation of slaves
because of the way that vector receives work on the Intel PMDs.  If there
isn't enough space to attempt a vector receive (a minimum of 4 buffers), the
rx burst returns a value of 0 -- no buffers read -- and the rx burst in
bonding moves on to the next adapter.  This tends to starve any slaves that
aren't first in line.  The issue doesn't really show up in single streams;
you need to run multiple streams that multiplex across all the slaves.
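
To make the failure mode concrete, here is a minimal sketch of the shape of
the rx loop (hypothetical bond_ctx struct for illustration; the real code in
drivers/net/bonding differs in detail):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    struct bond_ctx {                /* hypothetical, for illustration */
        uint16_t slaves[16];         /* slave port ids */
        unsigned int num_slaves;
    };

    static uint16_t
    bond_rx_sketch(struct bond_ctx *ctx, uint16_t queue_id,
                   struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
        uint16_t num_rx_total = 0;
        unsigned int i;

        /* The loop always begins at slave 0, so slave 0 is polled first
         * on every call. */
        for (i = 0; i < ctx->num_slaves && num_rx_total < nb_pkts; i++) {
            /* Once nb_pkts - num_rx_total drops below 4, a vector rx
             * path returns 0 even if packets are waiting, so slaves
             * later in the list are effectively starved. */
            num_rx_total += rte_eth_rx_burst(ctx->slaves[i], queue_id,
                                             bufs + num_rx_total,
                                             nb_pkts - num_rx_total);
        }
        return num_rx_total;
    }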

>
> I’m saying that using a hack in the bond code which helps some slave
> PMDs/application scenarios (your common cases) but hurts the bond code's
> performance and latency is not the right thing to do, because it may hurt
> other scenarios/PMDs using the bond.

What do you think about the attached patch?  It implements an explicit
round-robin for the "first" slave in order to enforce some sort of fairness.
Some limited testing has shown that with this our application can scale
polling to read the PMDs fast enough.  Note, I have only tested the 802.3ad
paths.  The changes are likely necessary for the other rx burst routines as
well, since they should suffer from the same issue.
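
For reference, here is a minimal sketch of the round-robin idea (again with
a hypothetical context struct; the attached patch applies this to the
802.3ad rx path and differs in detail):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    struct bond_rr_ctx {             /* hypothetical, for illustration */
        uint16_t slaves[16];         /* slave port ids */
        unsigned int num_slaves;
        unsigned int first_slave;    /* index of the slave to poll first */
    };

    static uint16_t
    bond_rx_rr_sketch(struct bond_rr_ctx *ctx, uint16_t queue_id,
                      struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
        uint16_t num_rx_total = 0;
        unsigned int start = ctx->first_slave;
        unsigned int i;

        for (i = 0; i < ctx->num_slaves && num_rx_total < nb_pkts; i++) {
            /* Rotate which slave is asked first so no single slave can
             * monopolize the full nb_pkts budget on every poll. */
            unsigned int idx = (start + i) % ctx->num_slaves;

            num_rx_total += rte_eth_rx_burst(ctx->slaves[idx], queue_id,
                                             bufs + num_rx_total,
                                             nb_pkts - num_rx_total);
        }

        /* The next call starts with the following slave. */
        ctx->first_slave = (start + 1) % ctx->num_slaves;
        return num_rx_total;
    }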


Thread overview: 28+ messages
2018-08-15 15:46 [dpdk-dev] [PATCH] " Luca Boccassi
2018-08-15 16:06 ` [dpdk-dev] [PATCH v2] " Luca Boccassi
2018-08-16 12:52   ` [dpdk-dev] [PATCH v3] " Luca Boccassi
2018-08-16 13:32     ` [dpdk-dev] [PATCH v4] " Luca Boccassi
2018-08-20 14:11       ` Chas Williams
2018-08-21 10:56         ` Matan Azrad
2018-08-21 11:13           ` Luca Boccassi
2018-08-21 14:58           ` Chas Williams
2018-08-21 15:43             ` Matan Azrad
2018-08-21 18:19               ` Chas Williams
2018-08-22  7:09                 ` Matan Azrad
2018-08-22 10:19                   ` [dpdk-dev] [dpdk-stable] " Luca Boccassi
2018-08-22 11:42                     ` Matan Azrad
2018-08-22 17:43                       ` Eric Kinzie
2018-08-23  7:28                         ` Matan Azrad
2018-08-23 15:51                           ` Chas Williams
2018-08-26  7:40                             ` Matan Azrad
2018-08-27 13:22                               ` Chas Williams
2018-08-27 15:30                                 ` Matan Azrad
2018-08-27 15:51                                   ` Chas Williams
2018-08-28  9:51                                     ` Matan Azrad
2018-08-29 14:30                                       ` Chas Williams
2018-08-29 15:20                                         ` Matan Azrad
2018-08-31 16:01                                           ` Luca Boccassi
2018-09-02 11:34                                             ` Matan Azrad
2018-09-09 20:57                                               ` Chas Williams [this message]
2018-09-12  5:38                                                 ` Matan Azrad
2018-09-19 18:09       ` [dpdk-dev] " Luca Boccassi
