From: Matan Azrad <matan@mellanox.com>
To: Chas Williams <3chas3@gmail.com>
Cc: Declan Doherty <declan.doherty@intel.com>,
Radu Nicolau <radu.nicolau@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>, Chas Williams <chas3@att.com>
Subject: Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4
Date: Thu, 13 Sep 2018 15:40:44 +0000 [thread overview]
Message-ID: <AM0PR0502MB401982FAE6C843E80D817897D21A0@AM0PR0502MB4019.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <CAG2-GkncJzABeEa_kOJrMmDUigUaiFCpiDGjZAeTU5TwgGS+DQ@mail.gmail.com>
Hi Chas
From: Chas Williams
> On Wed, Sep 12, 2018 at 1:56 AM Matan Azrad <matan@mellanox.com>
> wrote:
> >
> > Hi Chas
> >
> > From: Chas Williams
> > > On Mon, Aug 6, 2018 at 3:35 PM Matan Azrad <matan@mellanox.com>
> > > wrote:
> > > >
> > > >
> > > > Hi Chas
> > > >
> > > > From: Chas Williams
> > > > >On Mon, Aug 6, 2018 at 1:46 PM Matan Azrad
> > > > ><matan@mellanox.com> wrote:
> > > > >Hi Chas
> > > > >
> > > > >From: Chas Williams
> > > > >>On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad
> > > > >><matan@mellanox.com> wrote:
> > > > >>Hi Chas
> > > > >>
> > > > >> From: Chas Williams [mailto:3chas3@gmail.com]
> > > > >> On Thu, Aug 2, 2018 at 1:33 PM Matan Azrad <matan@mellanox.com> wrote:
> > > > >>> >
> > > > >>> > > I suggest doing it as follows: add one more parameter
> > > > >>> > > for LACP which selects how to configure the
> > > > >>> > > LACP MC group - lacp_mc_grp_conf:
> > > > >>> > > 1. rte_flow.
> > > > >>> > > 2. flow director.
> > > > >>> > > 3. add_mac.
> > > > >>> > > 4. set_mc_addr_list.
> > > > >>> > > 5. allmulti.
> > > > >>> > > 6. promiscuous.
> > > > >>> > > Maybe more... or less :)
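
As a rough illustration only - the devarg name lacp_mc_grp_conf and every
identifier below are hypothetical, nothing like this exists in the bonding
PMD today - the proposal could map to something like:

/* Hypothetical sketch - not an existing DPDK API. */
enum lacp_mc_grp_conf {
        LACP_MC_GRP_RTE_FLOW,      /* 1. program an rte_flow rule per slave */
        LACP_MC_GRP_FLOW_DIRECTOR, /* 2. legacy flow director filter        */
        LACP_MC_GRP_ADD_MAC,       /* 3. rte_eth_dev_mac_addr_add()         */
        LACP_MC_GRP_MC_ADDR_LIST,  /* 4. rte_eth_dev_set_mc_addr_list()     */
        LACP_MC_GRP_ALLMULTI,      /* 5. rte_eth_allmulticast_enable()      */
        LACP_MC_GRP_PROMISC,       /* 6. rte_eth_promiscuous_enable()       */
};

/* e.g. as a (hypothetical) bonding vdev devarg:
 *   --vdev 'net_bonding0,mode=4,slave=0000:05:00.0,lacp_mc_grp_conf=allmulti'
 * Slave add would fail if the slave cannot honour the selected option.
 */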
> > > > >>> > >
> > > > >>> > > This way the user decides how to do it; if it fails
> > > > >>> > > for a slave, the slave should be rejected.
> > > > >>> > > A conflict with another configuration (for example calling
> > > > >>> > > promiscuous disable while running LACP with
> > > > >>> > > lacp_mc_grp_conf=6, i.e. promiscuous) should raise an
> > > > >>> > > error.
> > > > >>> > >
> > > > >>> > > What do you think?
> > > > >>> > >
> > > > >>> >
> > > > >>> > Supporting an LACP mc group specific configuration does make
> > > > >>> > sense, but I wonder if this could just be handled by default
> > > > >>> > during slave add.
> > > > >>> >
> > > > >>> > 1 and 2 are essentially the same hardware filtering offload
> > > > >>> > mode, and the other modes are irrelevant if this is enabled.
> > > > >>> > It should not be possible to add a slave that doesn't support
> > > > >>> > it if the bond is configured for this mode, or to change the
> > > > >>> > bond into this mode if an existing slave doesn't support it.
> > > > >>>
> > > > >>> >
> > > > >>> > 3 should be the default expected behavior, but
> > > > >>> > rte_eth_bond_slave_add() should fail if the slave being
> > > > >>> > added doesn't support either adding the MAC to the slave or
> > > > >>> > adding the LACP MC address.
> > > > >>> >
> > > > >>> > Then the user could try rte_eth_allmulticast_enable() on the
> > > > >>> > bond port and then try to add the slave again, which should
> > > > >>> > fail if an existing slave doesn't support allmulticast, or
> > > > >>> > the slave add would fail again if the slave being added
> > > > >>> > doesn't support allmulticast; and finally just call
> > > > >>> > rte_eth_promiscuous_enable() on the bond and then try to
> > > > >>> > re-add that slave.
> > > > >>> >
> > > > >>> > But maybe having an explicit configuration parameter would be
> > > > >>> > better.
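
A minimal sketch of that fallback sequence from the application side (the
ethdev and bonding calls are the real ones; the assumption that
rte_eth_bond_slave_add() fails when a slave cannot take the LACP MC address
is the proposed behaviour, not today's):

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* bond_port and slave_port are assumed to be valid port ids. */
static int add_slave_with_fallback(uint16_t bond_port, uint16_t slave_port)
{
        if (rte_eth_bond_slave_add(bond_port, slave_port) == 0)
                return 0;                       /* slave took the LACP MC MAC */

        rte_eth_allmulticast_enable(bond_port); /* widen the Rx filter */
        if (rte_eth_bond_slave_add(bond_port, slave_port) == 0)
                return 0;

        rte_eth_promiscuous_enable(bond_port);  /* last resort */
        return rte_eth_bond_slave_add(bond_port, slave_port);
}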
> > > > >>>
> > > > >>> I'm not sure you understand exactly what I'm suggesting here,
> > > > >>> so again:
> > > > >>> I suggest adding a new parameter to the LACP mode called
> > > > >>> lacp_mc_grp_conf (or something else).
> > > > >>> So, when the user configures LACP (mode 4) he must configure
> > > > >>> the lacp_mc_grp_conf parameter to one of the options I
> > > > >>> suggested.
> > > > >>> This parameter is not per slave, meaning the bond PMD will use
> > > > >>> the selected option to configure the LACP MC group for all the
> > > > >>> slave ports.
> > > > >>>
> > > > >>> If one of the slaves doesn't support the selected option it
> > > > >>> should be rejected.
> > > > >>> Conflicts should raise an error.
> > > > >>>
> > > > >>> I agree here. Yes, if a slave can't manage to subscribe to
> > > > >>> the multicast group, an error should be raised. The only way
> > > > >>> for this to happen is if you don't have promisc support,
> > > > >>> which is the ultimate fallback.
> > > > >>
> > > > >>> The advantages are:
> > > > >>> The user knows which option is better to synchronize with his
> > > > >>> application.
> > > > >>> The user knows the slaves' capabilities better than the bond
> > > > >>> PMD does.
> > > > >>> All the slaves are configured in the same way - consistent traffic.
> > > > >>>
> > > > >>>
> > > > >>> It would be ideal if all the slaves had the same features and
> > > > >>> capabilities. That wasn't enforced before, so this would be a
> > > > >>> new restriction that would be less flexible than what we
> > > > >>> currently have. That doesn't seem like an improvement.
> > > > >>
> > > > >>> The bonding user probably doesn't care which mode is used.
> > > > >>> The bonding user just wants bonding to work. He doesn't care
> > > > >>> about
> > > the details. If I am writing
> > > > >>> an application with this proposed API, I need to make a list
> > > > >>> of adapters and what they support (and keep this up to date as
> > > > >>> DPDK
> > > evolves). Ugh.
> > > > >>
> > > > >>Applications commonly know the capabilities of the NICs they
> > > > >>work with.
> > > > >>
> > > > >>I know of at least one big application which is really suffering
> > > > >>because the bond configures promiscuous in mode 4 without the
> > > > >>application asking (it's considered there to be a bug in DPDK).
> > > > >>I think that providing another option would be better.
> > > > >>
> > > > >>I think providing another option would be better as well.
> > > > >>However, we disagree on the option.
> > > > >>If the PMD has no other way to subscribe to the multicast group,
> > > > >>it has to use promiscuous mode.
> > > > >
> > > > >>Yes, that is true, but there are a lot of other and better options;
> > > > >>promiscuous is greedy! It should be the last alternative to use.
> > > > >
> > > > >Unfortunately, it's the only option implemented.
> > > >
> > > > Yes, I know. I suggest changing it, or at least not making it worse.
> > > >
> > > > >>Providing a list of options only makes life complicated for the
> > > > >>developer and doesn't really make any difference in the end results.
> > > > >
> > > > >>It makes a big difference. For example:
> > > > >>Let's say the bonding groups 2 devices that support rte_flow.
> > > > >>The user wants neither promiscuous nor allmulticast; he just
> > > > >>wants to get his MAC traffic + the LACP MC group traffic (a
> > > > >>realistic use case). If he has an option to tell the bond PMD
> > > > >>"please use rte_flow to configure the specific LACP MC group",
> > > > >>it would be great.
> > > > >>Think how much work these applications must do with the current
> > > > >>behavior.
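
A rough sketch of what that per-slave rte_flow rule could look like,
assuming the slave's PMD accepts a plain Ethernet destination-MAC match
(the queue index is a placeholder and the struct field names follow the
rte_flow API of that era):

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Steer the IEEE 802.3 Slow Protocols multicast address used by LACP
 * (01:80:C2:00:00:02) to Rx queue 0 on one slave port. Sketch only. */
static int lacp_mc_grp_flow_setup(uint16_t slave_port_id)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_eth eth_spec = {
                .dst.addr_bytes = { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 },
        };
        struct rte_flow_item_eth eth_mask = {
                .dst.addr_bytes = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF },
        };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH,
                  .spec = &eth_spec, .mask = &eth_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error err;

        if (rte_flow_create(slave_port_id, &attr, pattern, actions, &err) == NULL)
                return -1; /* caller falls back to allmulti/promiscuous */
        return 0;
}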
> > > > >
> > > > >The bond PMD should already know how to do that itself.
> > > >
> > > > The bond can do it, with a lot of complexity, but again the user
> > > > must know what the bond chose in order to stay synchronized.
> > > > So, I think it's better that the user defines it, because it is
> > > > a traffic configuration (the same as the promiscuous configuration -
> > > > the user configures it).
> > > > > Again, you are forcing more work on the user by asking them to
> > > > > select between the methods.
> > > >
> > > > We can create a default option, as now (promiscuous).
> > > >
> > > > >> For instance, if the least common denominator between the two
> > > > >>PMDs is promiscuous mode, you are going to be forced to run
> > > > >>both in promiscuous mode instead of selecting the best mode for
> each PMD.
> > > > >
> > > > >>In this case promiscuous is better; using different
> > > > >>configurations is worse and goes against the bonding PMD
> > > > >>principle of getting consistent traffic from the slaves.
> > > > >>So, if one slave uses allmulti and one uses promiscuous, the
> > > > >>application may get inconsistent traffic, and that may trigger a
> > > > >>lot of problems and complications for some applications.
> > > > >
> > > > >Those applications should already have those problems.
> > > > > I can make the counter
> > > > >argument that there are potentially applications relying on the
> > > > >broken
> > > behavior.
> > > >
> > > > You're right. So adding allmulticast will require changes in these
> > > > applications.
> > > >
> > > > >We need to ignore those issues and fix this the "right" way. The
> > > > >"right" way IMHO is to pass the least amount of traffic possible
> > > > >in each case.
> > > >
> > > > Not at the cost of inconsistency, but it looks like we don't agree here.
> > > >
> > >
> > > I have recently run into this issue again with a device that doesn't
> > > support promiscuous, but does let me subscribe to the appropriate
> > > multicast groups.
> > > At this point, I am leaning toward adding another API call to the
> > > bonding API so that the user can provide a callback to set up
> > > whatever they want on the slaves.
> > > The default setup routine would be to enable promiscuous.
> > >
> > > Comments?
> >
> > The bonding already allows users to perform operations directly on the
> > slaves (it exports the port ids - rte_eth_bond_slaves_get), so I don't
> > understand why you need a new API.
> > The only change you may need is to add a parameter to disable the
> > promiscuous configuration in mode 4.
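
For reference, the kind of direct slave handling referred to here is only a
few lines of application code today (real bonding/ethdev calls; what to do
per slave is up to the application):

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Apply some per-slave Rx setup to every current slave of a bond. */
static void bond_slaves_setup(uint16_t bond_port_id)
{
        uint16_t slaves[RTE_MAX_ETHPORTS];
        int i, n;

        n = rte_eth_bond_slaves_get(bond_port_id, slaves, RTE_MAX_ETHPORTS);
        for (i = 0; i < n; i++)
                rte_eth_allmulticast_enable(slaves[i]); /* or any other setup */
}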
>
> Changing the API is a new API. We should attempt not to break any of the
> existing API.
>
> As for being able to operate on the slaves, yes, you can. But the bonding
> PMD also controls the slaves as well. It seems cleaner to make this explicit:
> the bonding driver calls out to the application to set up the 802.3ad
> listening when it needs to be done.
> If you want to control it a different way, you simply provide a null routine
> that does nothing and control it however you like.
The issue is that the bonding PMD cannot stay synchronized with such a callback; it doesn't know what was done by the application.
I don't think we should open direct application calls to the slaves through a new API. If the application really needs it, as we said, it already has the option to do it without a new API, although that is really not recommended by the bonding guide.
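
To make the point of contention concrete, the callback-style API discussed
above might look roughly like this (every name below is invented for
illustration; no such API exists in the bonding PMD):

#include <rte_ethdev.h>

/* Hypothetical only - illustrating the proposal under discussion. */
typedef int (*rte_eth_bond_8023ad_slave_setup_cb_t)(uint16_t slave_port_id,
                                                    void *cb_arg);

/* Register a per-slave setup hook on the bonded device (hypothetical). */
int rte_eth_bond_8023ad_slave_setup_cb_set(uint16_t bonded_port_id,
                                           rte_eth_bond_8023ad_slave_setup_cb_t cb,
                                           void *cb_arg);

/* Default hook if the application registers nothing: current behaviour. */
static int default_8023ad_slave_setup(uint16_t slave_port_id, void *cb_arg)
{
        (void)cb_arg;
        rte_eth_promiscuous_enable(slave_port_id);
        return 0;
}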
Thread overview: 26+ messages
2018-08-01 12:57 Radu Nicolau
2018-08-01 13:34 ` Chas Williams
2018-08-01 13:47 ` Radu Nicolau
2018-08-01 15:35 ` Chas Williams
2018-08-02 6:35 ` Matan Azrad
2018-08-02 13:23 ` Doherty, Declan
2018-08-02 14:24 ` Matan Azrad
2018-08-02 15:53 ` Doherty, Declan
2018-08-02 17:33 ` Matan Azrad
2018-08-02 21:10 ` Chas Williams
2018-08-03 5:47 ` Matan Azrad
2018-08-06 16:00 ` Chas Williams
2018-08-06 17:46 ` Matan Azrad
2018-08-06 19:01 ` Chas Williams
2018-08-06 19:35 ` Matan Azrad
2018-09-11 3:31 ` Chas Williams
2018-09-12 5:56 ` Matan Azrad
2018-09-13 15:14 ` Chas Williams
2018-09-13 15:40 ` Matan Azrad [this message]
2018-09-16 16:14 ` Chas Williams
2018-09-17 6:29 ` Matan Azrad
2018-08-02 21:05 ` Chas Williams
2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous Radu Nicolau
2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 2/2] net/bonding: propagate promiscous mode in mode 4 Radu Nicolau
2018-08-02 10:21 ` [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous Matan Azrad
2018-08-02 21:16 ` Chas Williams