* [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 @ 2018-08-01 12:57 Radu Nicolau 2018-08-01 13:34 ` Chas Williams 2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous Radu Nicolau 0 siblings, 2 replies; 26+ messages in thread From: Radu Nicolau @ 2018-08-01 12:57 UTC (permalink / raw) To: dev; +Cc: declan.doherty, chas3, Radu Nicolau Update the bonding promiscuous mode enable/disable functions to propagate the change to all slaves instead of doing nothing; this seems to be the correct behaviour according to the standard, and it is also how the Linux network stack behaves. Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> --- drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c index ad6e33f..16105cb 100644 --- a/drivers/net/bonding/rte_eth_bond_pmd.c +++ b/drivers/net/bonding/rte_eth_bond_pmd.c @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev) case BONDING_MODE_ROUND_ROBIN: case BONDING_MODE_BALANCE: case BONDING_MODE_BROADCAST: + case BONDING_MODE_8023AD: for (i = 0; i < internals->slave_count; i++) rte_eth_promiscuous_enable(internals->slaves[i].port_id); break; - /* In mode4 promiscus mode is managed when slave is added/removed */ - case BONDING_MODE_8023AD: - break; /* Promiscuous mode is propagated only to primary slave */ case BONDING_MODE_ACTIVE_BACKUP: case BONDING_MODE_TLB: @@ -2645,12 +2643,10 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev) case BONDING_MODE_ROUND_ROBIN: case BONDING_MODE_BALANCE: case BONDING_MODE_BROADCAST: + case BONDING_MODE_8023AD: for (i = 0; i < internals->slave_count; i++) rte_eth_promiscuous_disable(internals->slaves[i].port_id); break; - /* In mode4 promiscus mode is set managed when slave is added/removed */ - case BONDING_MODE_8023AD: - break; /* Promiscuous 
mode is propagated only to primary slave */ case BONDING_MODE_ACTIVE_BACKUP: case BONDING_MODE_TLB: -- 2.7.5 ^ permalink raw reply [flat|nested] 26+ messages in thread
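The control-flow change in this patch can be sketched as a small, self-contained simulation; the enum, the `promisc[]` array and the helper below are stand-ins for the real DPDK structures and `rte_eth_promiscuous_enable()`, used only to make the dispatch testable:

```c
/* Sketch of bond_ethdev_promiscuous_enable() after this patch: the
 * BONDING_MODE_8023AD case now falls through to the same loop as the
 * round-robin/balance/broadcast modes and enables promiscuous mode on
 * every slave, instead of doing nothing. */
#include <assert.h>

enum bonding_mode {
	BONDING_MODE_ROUND_ROBIN,
	BONDING_MODE_ACTIVE_BACKUP,
	BONDING_MODE_BALANCE,
	BONDING_MODE_BROADCAST,
	BONDING_MODE_8023AD,
	BONDING_MODE_TLB,
};

#define MAX_SLAVES 8
static int promisc[MAX_SLAVES];		/* per-slave promiscuous flag */

static void promiscuous_enable(int port_id) { promisc[port_id] = 1; }

/* Mirrors the patched switch statement. */
static void bond_promiscuous_enable(enum bonding_mode mode,
				    const int *slaves, int slave_count,
				    int primary)
{
	int i;

	switch (mode) {
	case BONDING_MODE_ROUND_ROBIN:
	case BONDING_MODE_BALANCE:
	case BONDING_MODE_BROADCAST:
	case BONDING_MODE_8023AD:	/* newly added by this patch */
		for (i = 0; i < slave_count; i++)
			promiscuous_enable(slaves[i]);
		break;
	/* Promiscuous mode is propagated only to the primary slave. */
	case BONDING_MODE_ACTIVE_BACKUP:
	case BONDING_MODE_TLB:
	default:
		promiscuous_enable(primary);
	}
}
```

The disable path is symmetric, with `rte_eth_promiscuous_disable()` in the loop instead.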
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-01 12:57 [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 Radu Nicolau @ 2018-08-01 13:34 ` Chas Williams 2018-08-01 13:47 ` Radu Nicolau 2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous Radu Nicolau 1 sibling, 1 reply; 26+ messages in thread From: Chas Williams @ 2018-08-01 13:34 UTC (permalink / raw) To: Radu Nicolau; +Cc: dev, Declan Doherty, Chas Williams On Wed, Aug 1, 2018 at 9:04 AM Radu Nicolau <radu.nicolau@intel.com> wrote: > Update the bonding promiscuous mode enable/disable functions as to > propagate the change to all slaves instead of doing nothing; this > seems to be the correct behaviour according to the standard, > and also implemented in the linux network stack. > > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> > --- > drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------ > 1 file changed, 2 insertions(+), 6 deletions(-) > > diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c > b/drivers/net/bonding/rte_eth_bond_pmd.c > index ad6e33f..16105cb 100644 > --- a/drivers/net/bonding/rte_eth_bond_pmd.c > +++ b/drivers/net/bonding/rte_eth_bond_pmd.c > @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev > *eth_dev) > case BONDING_MODE_ROUND_ROBIN: > case BONDING_MODE_BALANCE: > case BONDING_MODE_BROADCAST: > + case BONDING_MODE_8023AD: > for (i = 0; i < internals->slave_count; i++) > > rte_eth_promiscuous_enable(internals->slaves[i].port_id); > break; > - /* In mode4 promiscus mode is managed when slave is added/removed > */ > This comment is true (and it appears it is always on in 802.3ad mode): /* use this port as agregator */ port->aggregator_port_id = slave_id; rte_eth_promiscuous_enable(slave_id); If we are going to do this here, we should probably get rid of it in the other location so that future readers aren't confused about which is the one doing the 
work. Since some adapters don't have group multicast support, we might already be in promiscuous anyway. Turning off promiscuous for the bonding master might turn it off in the slaves where an application has already enabled it. > - case BONDING_MODE_8023AD: > - break; > /* Promiscuous mode is propagated only to primary slave */ > case BONDING_MODE_ACTIVE_BACKUP: > case BONDING_MODE_TLB: > @@ -2645,12 +2643,10 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev > *dev) > case BONDING_MODE_ROUND_ROBIN: > case BONDING_MODE_BALANCE: > case BONDING_MODE_BROADCAST: > + case BONDING_MODE_8023AD: > for (i = 0; i < internals->slave_count; i++) > > rte_eth_promiscuous_disable(internals->slaves[i].port_id); > break; > - /* In mode4 promiscus mode is set managed when slave is > added/removed */ > - case BONDING_MODE_8023AD: > - break; > /* Promiscuous mode is propagated only to primary slave */ > case BONDING_MODE_ACTIVE_BACKUP: > case BONDING_MODE_TLB: > -- > 2.7.5 > > ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-01 13:34 ` Chas Williams @ 2018-08-01 13:47 ` Radu Nicolau 2018-08-01 15:35 ` Chas Williams 0 siblings, 1 reply; 26+ messages in thread From: Radu Nicolau @ 2018-08-01 13:47 UTC (permalink / raw) To: Chas Williams; +Cc: dev, Declan Doherty, Chas Williams On 8/1/2018 2:34 PM, Chas Williams wrote: > > > On Wed, Aug 1, 2018 at 9:04 AM Radu Nicolau <radu.nicolau@intel.com > <mailto:radu.nicolau@intel.com>> wrote: > > Update the bonding promiscuous mode enable/disable functions as to > propagate the change to all slaves instead of doing nothing; this > seems to be the correct behaviour according to the standard, > and also implemented in the linux network stack. > > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com > <mailto:radu.nicolau@intel.com>> > --- > drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------ > 1 file changed, 2 insertions(+), 6 deletions(-) > > diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c > b/drivers/net/bonding/rte_eth_bond_pmd.c > index ad6e33f..16105cb 100644 > --- a/drivers/net/bonding/rte_eth_bond_pmd.c > +++ b/drivers/net/bonding/rte_eth_bond_pmd.c > @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct > rte_eth_dev *eth_dev) > case BONDING_MODE_ROUND_ROBIN: > case BONDING_MODE_BALANCE: > case BONDING_MODE_BROADCAST: > + case BONDING_MODE_8023AD: > for (i = 0; i < internals->slave_count; i++) > rte_eth_promiscuous_enable(internals->slaves[i].port_id); > break; > - /* In mode4 promiscus mode is managed when slave is > added/removed */ > > > This comment is true (and it appears it is always on in 802.3ad mode): > > /* use this port as agregator */ > port->aggregator_port_id = slave_id; > rte_eth_promiscuous_enable(slave_id); > > If we are going to do this here, we should probably get rid of it in > the other location so that future readers aren't confused about which > is the one doing the work. 
> > Since some adapters don't have group multicast support, we might > already be in promiscuous anyway. Turning off promiscuous for > the bonding master might turn it off in the slaves where an application > has already enabled it. The idea was to preserve the current behavior except for the explicit promiscuous disable/enable APIs; an application may disable the promiscuous mode on the bonding port and then enable it back, expecting it to propagate to the slaves. ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-01 13:47 ` Radu Nicolau @ 2018-08-01 15:35 ` Chas Williams 2018-08-02 6:35 ` Matan Azrad 0 siblings, 1 reply; 26+ messages in thread From: Chas Williams @ 2018-08-01 15:35 UTC (permalink / raw) To: Radu Nicolau; +Cc: dev, Declan Doherty, Chas Williams On Wed, Aug 1, 2018 at 9:48 AM Radu Nicolau <radu.nicolau@intel.com> wrote: > > > On 8/1/2018 2:34 PM, Chas Williams wrote: > > > > On Wed, Aug 1, 2018 at 9:04 AM Radu Nicolau <radu.nicolau@intel.com> > wrote: > >> Update the bonding promiscuous mode enable/disable functions as to >> propagate the change to all slaves instead of doing nothing; this >> seems to be the correct behaviour according to the standard, >> and also implemented in the linux network stack. >> >> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> >> --- >> drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------ >> 1 file changed, 2 insertions(+), 6 deletions(-) >> >> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c >> b/drivers/net/bonding/rte_eth_bond_pmd.c >> index ad6e33f..16105cb 100644 >> --- a/drivers/net/bonding/rte_eth_bond_pmd.c >> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c >> @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev >> *eth_dev) >> case BONDING_MODE_ROUND_ROBIN: >> case BONDING_MODE_BALANCE: >> case BONDING_MODE_BROADCAST: >> + case BONDING_MODE_8023AD: >> for (i = 0; i < internals->slave_count; i++) >> >> rte_eth_promiscuous_enable(internals->slaves[i].port_id); >> break; >> - /* In mode4 promiscus mode is managed when slave is added/removed >> */ >> > > This comment is true (and it appears it is always on in 802.3ad mode): > > /* use this port as agregator */ > port->aggregator_port_id = slave_id; > rte_eth_promiscuous_enable(slave_id); > > If we are going to do this here, we should probably get rid of it in > the other location so that future readers aren't confused about which > is the one doing the work. 
> > Since some adapters don't have group multicast support, we might > already be in promiscuous anyway. Turning off promiscuous for > the bonding master might turn it off in the slaves where an application > has already enabled it. > > > The idea was to preserve the current behavior except for the explicit > promiscuous disable/enable APIs; an application may disable the promiscuous > mode on the bonding port and then enable it back, expecting it to propagate > to the slaves. > Yes, but an application doing that will break 802.3ad because promiscuous mode is used to receive the LAG PDUs which are on a multicast group. That's why this code doesn't let you disable promiscuous when you are in 802.3ad mode. If you want to do this, it needs to be more complicated. In 802.3ad, you should try to add the multicast group to the slave interface. If that fails, turn on promisc mode for the slave. Make note of it. Later if bonding wants to enable/disable promisc mode for the slaves, it needs to check if that slave needs to remain in promisc to continue to get the LAG PDUs. ^ permalink raw reply [flat|nested] 26+ messages in thread
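The scheme Chas outlines could be sketched roughly as follows; the struct and function names are hypothetical stand-ins, not the bonding PMD API:

```c
/* Sketch of the fallback scheme: on slave add in 802.3ad mode, try to
 * subscribe the slave to the LACP multicast group; if the adapter
 * cannot, fall back to promiscuous mode and remember that in a
 * per-slave flag.  A later promiscuous-disable on the bond must then
 * skip slaves that still need promiscuous mode to receive LACP PDUs. */
#include <assert.h>
#include <stdbool.h>

struct slave {
	bool supports_mc_group;	/* can the NIC filter the LACP MC address? */
	bool promisc;		/* current promiscuous state */
	bool needs_promisc;	/* promisc is required for LACP PDUs */
};

/* Called when a slave is added in 802.3ad mode. */
static void slave_join_lacp(struct slave *s)
{
	if (s->supports_mc_group) {
		/* subscribe to the LACP group via the MC address list */
		s->needs_promisc = false;
	} else {
		s->promisc = true;	/* fall back to promiscuous mode */
		s->needs_promisc = true;	/* make note of it */
	}
}

/* Bond-level promiscuous disable: honour the per-slave requirement. */
static void bond_promisc_disable(struct slave *slaves, int n)
{
	for (int i = 0; i < n; i++)
		if (!slaves[i].needs_promisc)
			slaves[i].promisc = false;
}
```

With this bookkeeping, an application toggling promiscuous mode on the bond port cannot silently break LACP PDU reception on less capable slaves.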
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-01 15:35 ` Chas Williams @ 2018-08-02 6:35 ` Matan Azrad 2018-08-02 13:23 ` Doherty, Declan 0 siblings, 1 reply; 26+ messages in thread From: Matan Azrad @ 2018-08-02 6:35 UTC (permalink / raw) To: Chas Williams, Radu Nicolau; +Cc: dev, Declan Doherty, Chas Williams Hi Chas, Radu From: Chas Williams > On Wed, Aug 1, 2018 at 9:48 AM Radu Nicolau <radu.nicolau@intel.com> > wrote: > > > > > > > On 8/1/2018 2:34 PM, Chas Williams wrote: > > > > > > > > On Wed, Aug 1, 2018 at 9:04 AM Radu Nicolau <radu.nicolau@intel.com> > > wrote: > > > >> Update the bonding promiscuous mode enable/disable functions as to > >> propagate the change to all slaves instead of doing nothing; this > >> seems to be the correct behaviour according to the standard, and also > >> implemented in the linux network stack. > >> > >> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> > >> --- > >> drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------ > >> 1 file changed, 2 insertions(+), 6 deletions(-) > >> > >> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c > >> b/drivers/net/bonding/rte_eth_bond_pmd.c > >> index ad6e33f..16105cb 100644 > >> --- a/drivers/net/bonding/rte_eth_bond_pmd.c > >> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c > >> @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct > >> rte_eth_dev > >> *eth_dev) > >> case BONDING_MODE_ROUND_ROBIN: > >> case BONDING_MODE_BALANCE: > >> case BONDING_MODE_BROADCAST: > >> + case BONDING_MODE_8023AD: > >> for (i = 0; i < internals->slave_count; i++) > >> > >> rte_eth_promiscuous_enable(internals->slaves[i].port_id); > >> break; > >> - /* In mode4 promiscus mode is managed when slave is > added/removed > >> */ > >> > > > > This comment is true (and it appears it is always on in 802.3ad mode): > > > > /* use this port as agregator */ > > port->aggregator_port_id = slave_id; > > rte_eth_promiscuous_enable(slave_id); > > > > If we are going to do this 
here, we should probably get rid of it in > > the other location so that future readers aren't confused about which > > is the one doing the work. > > > > Since some adapters don't have group multicast support, we might > > already be in promiscuous anyway. Turning off promiscuous for the > > bonding master might turn it off in the slaves where an application > > has already enabled it. > > > > > > The idea was to preserve the current behavior except for the > > explicit promiscuous disable/enable APIs; an application may disable > > the promiscuous mode on the bonding port and then enable it back, > > expecting it to propagate to the slaves. > > > > Yes, but an application doing that will break 802.3ad because promiscuous > mode is used to receive the LAG PDUs which are on a multicast group. > That's why this code doesn't let you disable promiscuous when you are in > 802.3ad mode. > > If you want to do this it needs to be more complicated. In 802.3ad, you should > try to add the multicast group to the slave interface. If that fails, turn on > promisc mode for the slave. Make note of it. Later if bonding wants to > enabled/disable promisc mode for the slaves, it needs to check if that slaves > needs to remain in promisc to continue to get the LAG PDUs. I agree with Chas that this commit will hurt the current LACP logic, but maybe this is the time to open a discussion about it: the current bonding implementation is greedy in that it sets promiscuous mode automatically for LACP; the user asks for LACP and gets promiscuous mode as a side effect. So if the user doesn't want promiscuous mode, he must disable it directly on the slave ports and allow LACP using rte_flow\flow director\set_mc_addr_list\allmulti... I think the best way is to let the user enable LACP as he wants, directly via the slaves or via the bond promiscuous_enable API. For sure, it must be documented well. Matan. ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-02 6:35 ` Matan Azrad @ 2018-08-02 13:23 ` Doherty, Declan 2018-08-02 14:24 ` Matan Azrad 0 siblings, 1 reply; 26+ messages in thread From: Doherty, Declan @ 2018-08-02 13:23 UTC (permalink / raw) To: Matan Azrad, Chas Williams, Radu Nicolau; +Cc: dev, Chas Williams On 02/08/2018 7:35 AM, Matan Azrad wrote: > Hi Chas, Radu > > From: Chas Williams >> On Wed, Aug 1, 2018 at 9:48 AM Radu Nicolau <radu.nicolau@intel.com> >> wrote: >> >>> >>> >>> On 8/1/2018 2:34 PM, Chas Williams wrote: >>> >>> >>> >>> On Wed, Aug 1, 2018 at 9:04 AM Radu Nicolau <radu.nicolau@intel.com> >>> wrote: >>> >>>> Update the bonding promiscuous mode enable/disable functions as to >>>> propagate the change to all slaves instead of doing nothing; this >>>> seems to be the correct behaviour according to the standard, and also >>>> implemented in the linux network stack. >>>> >>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> >>>> --- >>>> drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------ >>>> 1 file changed, 2 insertions(+), 6 deletions(-) >>>> >>>> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c >>>> b/drivers/net/bonding/rte_eth_bond_pmd.c >>>> index ad6e33f..16105cb 100644 >>>> --- a/drivers/net/bonding/rte_eth_bond_pmd.c >>>> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c >>>> @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct >>>> rte_eth_dev >>>> *eth_dev) >>>> case BONDING_MODE_ROUND_ROBIN: >>>> case BONDING_MODE_BALANCE: >>>> case BONDING_MODE_BROADCAST: >>>> + case BONDING_MODE_8023AD: >>>> for (i = 0; i < internals->slave_count; i++) >>>> >>>> rte_eth_promiscuous_enable(internals->slaves[i].port_id); >>>> break; >>>> - /* In mode4 promiscus mode is managed when slave is >> added/removed >>>> */ >>>> >>> >>> This comment is true (and it appears it is always on in 802.3ad mode): >>> >>> /* use this port as agregator */ >>> port->aggregator_port_id = slave_id; >>> 
rte_eth_promiscuous_enable(slave_id); >>> >>> If we are going to do this here, we should probably get rid of it in >>> the other location so that future readers aren't confused about which >>> is the one doing the work. >>> >>> Since some adapters don't have group multicast support, we might >>> already be in promiscuous anyway. Turning off promiscuous for the >>> bonding master might turn it off in the slaves where an application >>> has already enabled it. >>> >>> >>> The idea was to preserve the current behavior except for the explicit >>> promiscuous disable/enable APIs; an application may disable the >>> promiscuous mode on the bonding port and then enable it back, >>> expecting it to propagate to the slaves. >>> >> >> Yes, but an application doing that will break 802.3ad because promiscuous >> mode is used to receive the LAG PDUs which are on a multicast group. >> That's why this code doesn't let you disable promiscuous when you are in >> 802.3ad mode. >> >> If you want to do this it needs to be more complicated. In 802.3ad, you should >> try to add the multicast group to the slave interface. If that fails, turn on >> promisc mode for the slave. Make note of it. Later if bonding wants to >> enabled/disable promisc mode for the slaves, it needs to check if that slaves >> needs to remain in promisc to continue to get the LAG PDUs. > > I agree with Chas that this commit will hurt current LACP logic, but maybe this is the time to open discussion about it: > The current bonding implementation is greedy while it setting promiscuous automatically for LACP, > The user asks LACP and he gets promiscuous by the way. > > So if the user don't want promiscuous he must to disable it directly via slaves ports and to allow LACP using rte_flow\flow director\set_mc_addr_list\allmulti... > > I think the best way is to let the user to enable LACP as he wants, directly via slaves or by the bond promiscuous_enable API. > For sure, it must be documented well. > > Matan. 
> I'm thinking that promiscuous mode should be disabled by default, and that the bond port should fail to start if any of the slave ports can't support subscription to the LACP multicast group. At that point the user can decide to enable promiscuous mode on the bond port (and therefore on all the slaves) and then start the bond. If we have slaves with different configurations for multicast subscriptions or promiscuous mode enablement, then there is potentially the opportunity for inconsistent traffic depending on which slaves are active. Personally I would prefer that all configuration, where possible, is propagated through the bond port. So if a user wants to use a port which doesn't support multicast subscription, then all ports in the bond need to be in promiscuous mode, and the user needs to explicitly enable it through the bond port; that way at least we can guarantee consistent traffic irrespective of which ports in the bond are active at any one time. > > ^ permalink raw reply [flat|nested] 26+ messages in thread
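Declan's start-time policy could be sketched like this; all names are illustrative, not the bonding API:

```c
/* Sketch of a start-time check: in 802.3ad mode the bond refuses to
 * start if any slave cannot subscribe to the LACP multicast group,
 * unless the user has explicitly enabled promiscuous mode on the bond
 * port (which propagates to all slaves, guaranteeing consistent
 * traffic whichever slaves are active). */
#include <assert.h>
#include <stdbool.h>

struct bond {
	bool promisc_enabled;		/* set by the user on the bond port */
	int  slave_count;
	bool slave_mc_capable[8];	/* per-slave MC subscription support */
};

/* Returns 0 on success, -1 if the bond must not start. */
static int bond_8023ad_start(const struct bond *b)
{
	if (b->promisc_enabled)
		return 0;	/* promisc covers LACP PDUs on every slave */

	for (int i = 0; i < b->slave_count; i++)
		if (!b->slave_mc_capable[i])
			return -1;	/* would miss LACP PDUs on this slave */
	return 0;
}
```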
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-02 13:23 ` Doherty, Declan @ 2018-08-02 14:24 ` Matan Azrad 2018-08-02 15:53 ` Doherty, Declan 2018-08-02 21:05 ` Chas Williams 0 siblings, 2 replies; 26+ messages in thread From: Matan Azrad @ 2018-08-02 14:24 UTC (permalink / raw) To: Doherty, Declan, Chas Williams, Radu Nicolau; +Cc: dev, Chas Williams Hi From: Doherty, Declan > On 02/08/2018 7:35 AM, Matan Azrad wrote: > > Hi Chas, Radu > > > > From: Chas Williams > >> On Wed, Aug 1, 2018 at 9:48 AM Radu Nicolau <radu.nicolau@intel.com> > >> wrote: > >> > >>> > >>> > >>> On 8/1/2018 2:34 PM, Chas Williams wrote: > >>> > >>> > >>> > >>> On Wed, Aug 1, 2018 at 9:04 AM Radu Nicolau <radu.nicolau@intel.com> > >>> wrote: > >>> > >>>> Update the bonding promiscuous mode enable/disable functions as to > >>>> propagate the change to all slaves instead of doing nothing; this > >>>> seems to be the correct behaviour according to the standard, and > >>>> also implemented in the linux network stack. 
> >>>> > >>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> > >>>> --- > >>>> drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------ > >>>> 1 file changed, 2 insertions(+), 6 deletions(-) > >>>> > >>>> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c > >>>> b/drivers/net/bonding/rte_eth_bond_pmd.c > >>>> index ad6e33f..16105cb 100644 > >>>> --- a/drivers/net/bonding/rte_eth_bond_pmd.c > >>>> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c > >>>> @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct > >>>> rte_eth_dev > >>>> *eth_dev) > >>>> case BONDING_MODE_ROUND_ROBIN: > >>>> case BONDING_MODE_BALANCE: > >>>> case BONDING_MODE_BROADCAST: > >>>> + case BONDING_MODE_8023AD: > >>>> for (i = 0; i < internals->slave_count; i++) > >>>> > >>>> rte_eth_promiscuous_enable(internals->slaves[i].port_id); > >>>> break; > >>>> - /* In mode4 promiscus mode is managed when slave is > >> added/removed > >>>> */ > >>>> > >>> > >>> This comment is true (and it appears it is always on in 802.3ad mode): > >>> > >>> /* use this port as agregator */ > >>> port->aggregator_port_id = slave_id; > >>> rte_eth_promiscuous_enable(slave_id); > >>> > >>> If we are going to do this here, we should probably get rid of it in > >>> the other location so that future readers aren't confused about > >>> which is the one doing the work. > >>> > >>> Since some adapters don't have group multicast support, we might > >>> already be in promiscuous anyway. Turning off promiscuous for the > >>> bonding master might turn it off in the slaves where an application > >>> has already enabled it. > >>> > >>> > >>> The idea was to preserve the current behavior except for the > >>> explicit promiscuous disable/enable APIs; an application may disable > >>> the promiscuous mode on the bonding port and then enable it back, > >>> expecting it to propagate to the slaves. 
> >>> > >> > >> Yes, but an application doing that will break 802.3ad because > >> promiscuous mode is used to receive the LAG PDUs which are on a multicast > group. > >> That's why this code doesn't let you disable promiscuous when you are > >> in 802.3ad mode. > >> > >> If you want to do this it needs to be more complicated. In 802.3ad, > >> you should try to add the multicast group to the slave interface. If > >> that fails, turn on promisc mode for the slave. Make note of it. > >> Later if bonding wants to enabled/disable promisc mode for the > >> slaves, it needs to check if that slaves needs to remain in promisc to > continue to get the LAG PDUs. > > > > I agree with Chas that this commit will hurt current LACP logic, but maybe > this is the time to open discussion about it: > > The current bonding implementation is greedy while it setting > > promiscuous automatically for LACP, The user asks LACP and he gets > promiscuous by the way. > > > > So if the user don't want promiscuous he must to disable it directly via slaves > ports and to allow LACP using rte_flow\flow > director\set_mc_addr_list\allmulti... > > > > I think the best way is to let the user to enable LACP as he wants, directly via > slaves or by the bond promiscuous_enable API. > > For sure, it must be documented well. > > > > Matan. > > > > I'm thinking that default behavior should be that promiscuous mode should be > disabled by default, and that the bond port should fail to start if any of the slave > ports can't support subscription to the LACP multicast group. At this point the > user can decided to enable promiscuous mode on the bond port (and therefore > on all the slaves) and then start the bond. If we have slaves with different > configurations for multicast subscriptions or promiscuous mode enablement, > then there is potentially the opportunity for inconsistency in traffic depending > on which slaves are active. 
> Personally I would prefer that all configuration if possible is propagated > through the bond port. So if a user wants to use a port which doesn't support > multicast subscription then all ports in the bond need to be in promiscuous > mode, and the user needs to explicitly enable it through the bond port, that way > at least we can guarantee consist traffic irrespective of which ports in the bond > are active at any one time. That's exactly what I said :) I suggest doing it as follows: add one more parameter for LACP which specifies how to configure the LACP MC group - lacp_mc_grp_conf: 1. rte_flow. 2. flow director. 3. add_mac. 4. set_mc_add_list. 5. allmulti. 6. promiscuous. Maybe more... or less :) This way the user decides how to do it; if it fails for a slave, the slave should be rejected. A conflict with another configuration (for example, calling promiscuous disable while running LACP with lacp_mc_grp_conf=6) should raise an error. What do you think? Matan. ^ permalink raw reply [flat|nested] 26+ messages in thread
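Matan's proposed parameter could look roughly like the sketch below; the enum and the capability struct are hypothetical, and the real option would presumably arrive as a bonding devarg:

```c
/* Sketch of a lacp_mc_grp_conf option: the user picks how the LACP
 * multicast group is delivered, and adding a slave fails if that slave
 * cannot honour the chosen method.  A conflicting later call (e.g.
 * promiscuous-disable while LACP relies on promiscuous mode) would
 * likewise be rejected. */
#include <assert.h>
#include <stdbool.h>

enum lacp_mc_grp_conf {
	LACP_MC_RTE_FLOW,
	LACP_MC_FLOW_DIRECTOR,
	LACP_MC_ADD_MAC,
	LACP_MC_SET_MC_ADDR_LIST,
	LACP_MC_ALLMULTI,
	LACP_MC_PROMISCUOUS,
};

struct slave_caps {
	bool rte_flow, fdir, add_mac, mc_addr_list, allmulti, promisc;
};

/* Reject the slave if it cannot implement the configured method. */
static int lacp_check_slave(enum lacp_mc_grp_conf conf,
			    const struct slave_caps *c)
{
	switch (conf) {
	case LACP_MC_RTE_FLOW:		return c->rte_flow ? 0 : -1;
	case LACP_MC_FLOW_DIRECTOR:	return c->fdir ? 0 : -1;
	case LACP_MC_ADD_MAC:		return c->add_mac ? 0 : -1;
	case LACP_MC_SET_MC_ADDR_LIST:	return c->mc_addr_list ? 0 : -1;
	case LACP_MC_ALLMULTI:		return c->allmulti ? 0 : -1;
	case LACP_MC_PROMISCUOUS:	return c->promisc ? 0 : -1;
	}
	return -1;
}
```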
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-02 14:24 ` Matan Azrad @ 2018-08-02 15:53 ` Doherty, Declan 2018-08-02 17:33 ` Matan Azrad 2018-08-02 21:05 ` Chas Williams 1 sibling, 1 reply; 26+ messages in thread From: Doherty, Declan @ 2018-08-02 15:53 UTC (permalink / raw) To: Matan Azrad, Chas Williams, Radu Nicolau; +Cc: dev, Chas Williams On 02/08/2018 3:24 PM, Matan Azrad wrote: > Hi > > From: Doherty, Declan >> On 02/08/2018 7:35 AM, Matan Azrad wrote: >>> Hi Chas, Radu >>> >>> From: Chas Williams >>>> On Wed, Aug 1, 2018 at 9:48 AM Radu Nicolau <radu.nicolau@intel.com> >>>> wrote: >>>> >>>>> >>>>> >>>>> On 8/1/2018 2:34 PM, Chas Williams wrote: >>>>> >>>>> >>>>> >>>>> On Wed, Aug 1, 2018 at 9:04 AM Radu Nicolau <radu.nicolau@intel.com> >>>>> wrote: >>>>> >>>>>> Update the bonding promiscuous mode enable/disable functions as to >>>>>> propagate the change to all slaves instead of doing nothing; this >>>>>> seems to be the correct behaviour according to the standard, and >>>>>> also implemented in the linux network stack. 
>>>>>> >>>>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> >>>>>> --- >>>>>> drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------ >>>>>> 1 file changed, 2 insertions(+), 6 deletions(-) >>>>>> >>>>>> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c >>>>>> b/drivers/net/bonding/rte_eth_bond_pmd.c >>>>>> index ad6e33f..16105cb 100644 >>>>>> --- a/drivers/net/bonding/rte_eth_bond_pmd.c >>>>>> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c >>>>>> @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct >>>>>> rte_eth_dev >>>>>> *eth_dev) >>>>>> case BONDING_MODE_ROUND_ROBIN: >>>>>> case BONDING_MODE_BALANCE: >>>>>> case BONDING_MODE_BROADCAST: >>>>>> + case BONDING_MODE_8023AD: >>>>>> for (i = 0; i < internals->slave_count; i++) >>>>>> >>>>>> rte_eth_promiscuous_enable(internals->slaves[i].port_id); >>>>>> break; >>>>>> - /* In mode4 promiscus mode is managed when slave is >>>> added/removed >>>>>> */ >>>>>> >>>>> >>>>> This comment is true (and it appears it is always on in 802.3ad mode): >>>>> >>>>> /* use this port as agregator */ >>>>> port->aggregator_port_id = slave_id; >>>>> rte_eth_promiscuous_enable(slave_id); >>>>> >>>>> If we are going to do this here, we should probably get rid of it in >>>>> the other location so that future readers aren't confused about >>>>> which is the one doing the work. >>>>> >>>>> Since some adapters don't have group multicast support, we might >>>>> already be in promiscuous anyway. Turning off promiscuous for the >>>>> bonding master might turn it off in the slaves where an application >>>>> has already enabled it. >>>>> >>>>> >>>>> The idea was to preserve the current behavior except for the >>>>> explicit promiscuous disable/enable APIs; an application may disable >>>>> the promiscuous mode on the bonding port and then enable it back, >>>>> expecting it to propagate to the slaves. 
>>>>> >>>> >>>> Yes, but an application doing that will break 802.3ad because >>>> promiscuous mode is used to receive the LAG PDUs which are on a multicast >> group. >>>> That's why this code doesn't let you disable promiscuous when you are >>>> in 802.3ad mode. >>>> >>>> If you want to do this it needs to be more complicated. In 802.3ad, >>>> you should try to add the multicast group to the slave interface. If >>>> that fails, turn on promisc mode for the slave. Make note of it. >>>> Later if bonding wants to enabled/disable promisc mode for the >>>> slaves, it needs to check if that slaves needs to remain in promisc to >> continue to get the LAG PDUs. >>> >>> I agree with Chas that this commit will hurt current LACP logic, but maybe >> this is the time to open discussion about it: >>> The current bonding implementation is greedy while it setting >>> promiscuous automatically for LACP, The user asks LACP and he gets >> promiscuous by the way. >>> >>> So if the user don't want promiscuous he must to disable it directly via slaves >> ports and to allow LACP using rte_flow\flow >> director\set_mc_addr_list\allmulti... >>> >>> I think the best way is to let the user to enable LACP as he wants, directly via >> slaves or by the bond promiscuous_enable API. >>> For sure, it must be documented well. >>> >>> Matan. >>> >> >> I'm thinking that default behavior should be that promiscuous mode should be >> disabled by default, and that the bond port should fail to start if any of the slave >> ports can't support subscription to the LACP multicast group. At this point the >> user can decided to enable promiscuous mode on the bond port (and therefore >> on all the slaves) and then start the bond. If we have slaves with different >> configurations for multicast subscriptions or promiscuous mode enablement, >> then there is potentially the opportunity for inconsistency in traffic depending >> on which slaves are active. 
> >> Personally I would prefer that all configuration if possible is propagated >> through the bond port. So if a user wants to use a port which doesn't support >> multicast subscription then all ports in the bond need to be in promiscuous >> mode, and the user needs to explicitly enable it through the bond port, that way >> at least we can guarantee consist traffic irrespective of which ports in the bond >> are active at any one time. > > That's exactly what I said :) > :) I guess so, but it was the configuration directly via the slave port bit which had me concerned, I think this needs to be managed directly from the bond port, ideally they > I suggest to do it like next, > To add one more parameter for LACP which means how to configure the LACP MC group - lacp_mc_grp_conf: > 1. rte_flow. > 2. flow director. > 3. add_mac. > 3. set_mc_add_list > 4. allmulti > 5. promiscuous > Maybe more... or less :) > > By this way the user decides how to do it, if it's fail for a slave, the salve should be rejected. > Conflict with another configuration(for example calling to promiscuous disable while running LACP lacp_mc_grp_conf=5) should raise an error. > > What do you think? > Supporting an LACP mc group specific configuration does make sense, but I wonder if this could just be handled by default during slave add. 1 and 2 are essentially the same hardware filtering offload mode, and the other modes are irrelevant if this is enabled, it should not be possible to add the slave if the bond is configured for this mode, or possible to change the bond into this mode if an existing slave doesn't support it. 3 should be the default expected behavior, but rte_eth_bond_slave_add() should fail if the slave being added doesn't support either adding the MAC to the slave or adding the LACP MC address. 
Then the user could try rte_eth_allmulticast_enable() on the bond port and then try to add the slave again, which should fail if an existing slave didn't support allmulticast, or the slave add would fail again if the slave being added didn't support allmulticast; and finally just call rte_eth_promiscuous_enable() on the bond and then try to re-add that slave. But maybe having an explicit configuration parameter would be better. > Matan. > > ^ permalink raw reply [flat|nested] 26+ messages in thread
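The fallback ordering sketched above (dedicated MC-group subscription first, then allmulticast, then promiscuous only as a last resort) can be modeled as a small decision function. This is a self-contained sketch, not DPDK code: the struct, enum, and function names are hypothetical, and a real implementation would probe the capabilities by calling rte_eth_dev_set_mc_addr_list(), rte_eth_allmulticast_enable(), and rte_eth_promiscuous_enable() on the slave port and checking their return codes.

```c
/* Hypothetical capability flags for a slave port; in a real bonding
 * PMD these would be discovered by attempting each ethdev call and
 * checking whether it succeeds. */
struct slave_caps {
    int has_mc_filter;  /* can subscribe to the slow-protocols MC group */
    int has_allmulti;   /* can receive all multicast */
    int has_promisc;    /* can enter promiscuous mode */
};

enum lacp_rx_mode {
    LACP_RX_MC_FILTER,  /* least intrusive: filter the LACP MC group */
    LACP_RX_ALLMULTI,   /* receive all multicast traffic */
    LACP_RX_PROMISC,    /* last resort: promiscuous mode */
    LACP_RX_NONE        /* cannot receive LACPDUs: reject the slave */
};

/* Pick the least intrusive mode the slave supports, falling back
 * step by step; LACP_RX_NONE means rte_eth_bond_slave_add() should
 * fail for this slave. */
static enum lacp_rx_mode
select_lacp_rx_mode(const struct slave_caps *caps)
{
    if (caps->has_mc_filter)
        return LACP_RX_MC_FILTER;
    if (caps->has_allmulti)
        return LACP_RX_ALLMULTI;
    if (caps->has_promisc)
        return LACP_RX_PROMISC;
    return LACP_RX_NONE;
}
```

With this ordering, promiscuous mode is only ever chosen when both of the narrower options are unavailable, which matches the "ultimate fallback" role it plays later in this thread.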
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-02 15:53 ` Doherty, Declan @ 2018-08-02 17:33 ` Matan Azrad 2018-08-02 21:10 ` Chas Williams 0 siblings, 1 reply; 26+ messages in thread From: Matan Azrad @ 2018-08-02 17:33 UTC (permalink / raw) To: Doherty, Declan, Chas Williams, Radu Nicolau; +Cc: dev, Chas Williams Hi Declan From: Doherty, Declan > On 02/08/2018 3:24 PM, Matan Azrad wrote: > > Hi > > > > From: Doherty, Declan > >> On 02/08/2018 7:35 AM, Matan Azrad wrote: > >>> Hi Chas, Radu > >>> > >>> From: Chas Williams > >>>> On Wed, Aug 1, 2018 at 9:48 AM Radu Nicolau <radu.nicolau@intel.com> > >>>> wrote: > >>>> > >>>>> > >>>>> > >>>>> On 8/1/2018 2:34 PM, Chas Williams wrote: > >>>>> > >>>>> > >>>>> > >>>>> On Wed, Aug 1, 2018 at 9:04 AM Radu Nicolau > <radu.nicolau@intel.com> > >>>>> wrote: > >>>>> > >>>>>> Update the bonding promiscuous mode enable/disable functions as to > >>>>>> propagate the change to all slaves instead of doing nothing; this > >>>>>> seems to be the correct behaviour according to the standard, and > >>>>>> also implemented in the linux network stack. 
> >>>>>> > >>>>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> > >>>>>> --- > >>>>>> drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------ > >>>>>> 1 file changed, 2 insertions(+), 6 deletions(-) > >>>>>> > >>>>>> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c > >>>>>> b/drivers/net/bonding/rte_eth_bond_pmd.c > >>>>>> index ad6e33f..16105cb 100644 > >>>>>> --- a/drivers/net/bonding/rte_eth_bond_pmd.c > >>>>>> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c > >>>>>> @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct > >>>>>> rte_eth_dev > >>>>>> *eth_dev) > >>>>>> case BONDING_MODE_ROUND_ROBIN: > >>>>>> case BONDING_MODE_BALANCE: > >>>>>> case BONDING_MODE_BROADCAST: > >>>>>> + case BONDING_MODE_8023AD: > >>>>>> for (i = 0; i < internals->slave_count; i++) > >>>>>> > >>>>>> rte_eth_promiscuous_enable(internals->slaves[i].port_id); > >>>>>> break; > >>>>>> - /* In mode4 promiscus mode is managed when slave is > >>>> added/removed > >>>>>> */ > >>>>>> > >>>>> > >>>>> This comment is true (and it appears it is always on in 802.3ad mode): > >>>>> > >>>>> /* use this port as agregator */ > >>>>> port->aggregator_port_id = slave_id; > >>>>> rte_eth_promiscuous_enable(slave_id); > >>>>> > >>>>> If we are going to do this here, we should probably get rid of it in > >>>>> the other location so that future readers aren't confused about > >>>>> which is the one doing the work. > >>>>> > >>>>> Since some adapters don't have group multicast support, we might > >>>>> already be in promiscuous anyway. Turning off promiscuous for the > >>>>> bonding master might turn it off in the slaves where an application > >>>>> has already enabled it. > >>>>> > >>>>> > >>>>> The idea was to preserve the current behavior except for the > >>>>> explicit promiscuous disable/enable APIs; an application may disable > >>>>> the promiscuous mode on the bonding port and then enable it back, > >>>>> expecting it to propagate to the slaves. 
> >>>>> > >>>> > >>>> Yes, but an application doing that will break 802.3ad because > >>>> promiscuous mode is used to receive the LAG PDUs which are on a > multicast > >> group. > >>>> That's why this code doesn't let you disable promiscuous when you are > >>>> in 802.3ad mode. > >>>> > >>>> If you want to do this it needs to be more complicated. In 802.3ad, > >>>> you should try to add the multicast group to the slave interface. If > >>>> that fails, turn on promisc mode for the slave. Make note of it. > >>>> Later if bonding wants to enabled/disable promisc mode for the > >>>> slaves, it needs to check if that slaves needs to remain in promisc to > >> continue to get the LAG PDUs. > >>> > >>> I agree with Chas that this commit will hurt current LACP logic, but maybe > >> this is the time to open discussion about it: > >>> The current bonding implementation is greedy while it setting > >>> promiscuous automatically for LACP, The user asks LACP and he gets > >> promiscuous by the way. > >>> > >>> So if the user don't want promiscuous he must to disable it directly via > slaves > >> ports and to allow LACP using rte_flow\flow > >> director\set_mc_addr_list\allmulti... > >>> > >>> I think the best way is to let the user to enable LACP as he wants, directly > via > >> slaves or by the bond promiscuous_enable API. > >>> For sure, it must be documented well. > >>> > >>> Matan. > >>> > >> > >> I'm thinking that default behavior should be that promiscuous mode should > be > >> disabled by default, and that the bond port should fail to start if any of the > slave > >> ports can't support subscription to the LACP multicast group. At this point > the > >> user can decided to enable promiscuous mode on the bond port (and > therefore > >> on all the slaves) and then start the bond. 
If we have slaves with different > >> configurations for multicast subscriptions or promiscuous mode enablement, > >> then there is potentially the opportunity for inconsistency in traffic > depending > >> on which slaves are active. > > > >> Personally I would prefer that all configuration if possible is propagated > >> through the bond port. So if a user wants to use a port which doesn't support > >> multicast subscription then all ports in the bond need to be in promiscuous > >> mode, and the user needs to explicitly enable it through the bond > port, that > way > >> at least we can guarantee consist traffic irrespective of which ports > in the > > bond > >> are active at any one time. > > > > That's exactly what I said :) > > > > :) > > I guess so, but it was the configuration directly via the slave port bit > > which had me concerned, I think this needs to be managed directly from > > the bond port, ideally they Yes, but you know that the bond is far from supporting all of the ethdev configuration, and there is no clear limitation in the bond documentation against doing configuration via the slaves. I agree that it's better to use some bond API to configure the LACP MC group. > > > > > > I suggest to do it like next, > > > To add one more parameter for LACP which means how to configure the > > LACP MC group - lacp_mc_grp_conf: > > > 1. rte_flow. > > > 2. flow director. > > > 3. add_mac. > > > 3. set_mc_add_list > > > 4. allmulti > > > 5. promiscuous > > > Maybe more... 
> > > 1 and 2 are essentially the same hardware filtering offload mode, and > the other modes are irrelevant if this is enabled, it should not be > possible to add the slave if the bond is configured for this mode, or > possible to change the bond into this mode if an existing slave doesn't > support it. > > 3 should be the default expected behavior, but rte_eth_bond_slave_add() > should fail if the slave being added doesn't support either adding the > MAC to the slave or adding the LACP MC address. > > Then the user could try either rte_eth_allmulticast_enable() on the bond > port and then try to add the slave again, which should fail if existing > slave didn't support allmulticast or the add slave would fail again if > the slave didn't support allmulticast and finally just call > rte_eth_promiscuous_enable() on the bond and then try to re-add the that > slave. > > but maybe having a explicit configuration parameter would be better. I'm not sure you understand exactly what I'm suggesting here, so again: I suggest adding a new parameter to the LACP mode called lacp_mc_grp_conf (or something else). So, when the user configures LACP (mode 4) he must configure the lacp_mc_grp_conf parameter to one of the options I suggested. This parameter is not per slave, meaning the bond PMD will use the selected option to configure the LACP MC group for all the slave ports. If one of the slaves doesn't support the selected option it should be rejected. Conflicts should raise an error. The advantages are: The user knows which option synchronizes best with his application. The user knows the slaves' capabilities better than the bond PMD does. All the slaves are configured the same way - consistent traffic. ^ permalink raw reply [flat|nested] 26+ messages in thread
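Matan's proposal above - a bond-wide lacp_mc_grp_conf parameter whose value conflicts with certain later requests - could be sketched as follows. This is a hypothetical model: neither the enum nor the function exists in DPDK, and the enum values simply mirror the options listed in the thread.

```c
#include <errno.h>

/* Hypothetical encoding of the proposed lacp_mc_grp_conf device
 * argument; values mirror the options Matan lists in the thread. */
enum lacp_mc_grp_conf {
    LACP_MC_GRP_RTE_FLOW,
    LACP_MC_GRP_FLOW_DIRECTOR,
    LACP_MC_GRP_ADD_MAC,
    LACP_MC_GRP_SET_MC_ADDR_LIST,
    LACP_MC_GRP_ALLMULTI,
    LACP_MC_GRP_PROMISC,
};

/* A conflicting request - e.g. disabling promiscuous mode while the
 * LACP MC group is being delivered via promiscuous mode - should be
 * refused with an error rather than silently breaking LACPDU
 * reception on the slaves. */
static int
bond_8023ad_promisc_disable_check(enum lacp_mc_grp_conf conf)
{
    if (conf == LACP_MC_GRP_PROMISC)
        return -EBUSY;
    return 0;
}
```

The same shape of check would apply to the other options, e.g. rejecting an allmulticast disable while lacp_mc_grp_conf selects allmulti.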
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-02 17:33 ` Matan Azrad @ 2018-08-02 21:10 ` Chas Williams 2018-08-03 5:47 ` Matan Azrad 0 siblings, 1 reply; 26+ messages in thread From: Chas Williams @ 2018-08-02 21:10 UTC (permalink / raw) To: Matan Azrad; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams On Thu, Aug 2, 2018 at 1:33 PM Matan Azrad <matan@mellanox.com> wrote: > Hi Declan > > From: Doherty, Declan > > On 02/08/2018 3:24 PM, Matan Azrad wrote: > > > Hi > > > > > > From: Doherty, Declan > > >> On 02/08/2018 7:35 AM, Matan Azrad wrote: > > >>> Hi Chas, Radu > > >>> > > >>> From: Chas Williams > > >>>> On Wed, Aug 1, 2018 at 9:48 AM Radu Nicolau <radu.nicolau@intel.com > > > > >>>> wrote: > > >>>> > > >>>>> > > >>>>> > > >>>>> On 8/1/2018 2:34 PM, Chas Williams wrote: > > >>>>> > > >>>>> > > >>>>> > > >>>>> On Wed, Aug 1, 2018 at 9:04 AM Radu Nicolau > > <radu.nicolau@intel.com> > > >>>>> wrote: > > >>>>> > > >>>>>> Update the bonding promiscuous mode enable/disable functions as to > > >>>>>> propagate the change to all slaves instead of doing nothing; this > > >>>>>> seems to be the correct behaviour according to the standard, and > > >>>>>> also implemented in the linux network stack. 
> > >>>>>> > > >>>>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> > > >>>>>> --- > > >>>>>> drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------ > > >>>>>> 1 file changed, 2 insertions(+), 6 deletions(-) > > >>>>>> > > >>>>>> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c > > >>>>>> b/drivers/net/bonding/rte_eth_bond_pmd.c > > >>>>>> index ad6e33f..16105cb 100644 > > >>>>>> --- a/drivers/net/bonding/rte_eth_bond_pmd.c > > >>>>>> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c > > >>>>>> @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct > > >>>>>> rte_eth_dev > > >>>>>> *eth_dev) > > >>>>>> case BONDING_MODE_ROUND_ROBIN: > > >>>>>> case BONDING_MODE_BALANCE: > > >>>>>> case BONDING_MODE_BROADCAST: > > >>>>>> + case BONDING_MODE_8023AD: > > >>>>>> for (i = 0; i < internals->slave_count; i++) > > >>>>>> > > >>>>>> rte_eth_promiscuous_enable(internals->slaves[i].port_id); > > >>>>>> break; > > >>>>>> - /* In mode4 promiscus mode is managed when slave is > > >>>> added/removed > > >>>>>> */ > > >>>>>> > > >>>>> > > >>>>> This comment is true (and it appears it is always on in 802.3ad > mode): > > >>>>> > > >>>>> /* use this port as agregator */ > > >>>>> port->aggregator_port_id = slave_id; > > >>>>> rte_eth_promiscuous_enable(slave_id); > > >>>>> > > >>>>> If we are going to do this here, we should probably get rid of it > in > > >>>>> the other location so that future readers aren't confused about > > >>>>> which is the one doing the work. > > >>>>> > > >>>>> Since some adapters don't have group multicast support, we might > > >>>>> already be in promiscuous anyway. Turning off promiscuous for the > > >>>>> bonding master might turn it off in the slaves where an application > > >>>>> has already enabled it. 
> > >>>>> > > >>>>> > > >>>>> The idea was to preserve the current behavior except for the > > >>>>> explicit promiscuous disable/enable APIs; an application may > disable > > >>>>> the promiscuous mode on the bonding port and then enable it back, > > >>>>> expecting it to propagate to the slaves. > > >>>>> > > >>>> > > >>>> Yes, but an application doing that will break 802.3ad because > > >>>> promiscuous mode is used to receive the LAG PDUs which are on a > > multicast > > >> group. > > >>>> That's why this code doesn't let you disable promiscuous when you > are > > >>>> in 802.3ad mode. > > >>>> > > >>>> If you want to do this it needs to be more complicated. In 802.3ad, > > >>>> you should try to add the multicast group to the slave interface. > If > > >>>> that fails, turn on promisc mode for the slave. Make note of it. > > >>>> Later if bonding wants to enabled/disable promisc mode for the > > >>>> slaves, it needs to check if that slaves needs to remain in promisc > to > > >> continue to get the LAG PDUs. > > >>> > > >>> I agree with Chas that this commit will hurt current LACP logic, but > maybe > > >> this is the time to open discussion about it: > > >>> The current bonding implementation is greedy while it setting > > >>> promiscuous automatically for LACP, The user asks LACP and he gets > > >> promiscuous by the way. > > >>> > > >>> So if the user don't want promiscuous he must to disable it directly > via > > slaves > > >> ports and to allow LACP using rte_flow\flow > > >> director\set_mc_addr_list\allmulti... > > >>> > > >>> I think the best way is to let the user to enable LACP as he wants, > directly > > via > > >> slaves or by the bond promiscuous_enable API. > > >>> For sure, it must be documented well. > > >>> > > >>> Matan. 
> > >>> > > >> > > >> I'm thinking that default behavior should be that promiscuous mode > should > > be > > >> disabled by default, and that the bond port should fail to start if > any of the > > slave > > >> ports can't support subscription to the LACP multicast group. At this > point > > the > > >> user can decided to enable promiscuous mode on the bond port (and > > therefore > > >> on all the slaves) and then start the bond. If we have slaves with > different > > >> configurations for multicast subscriptions or promiscuous mode > enablement, > > >> then there is potentially the opportunity for inconsistency in traffic > > depending > > >> on which slaves are active. > > > > > >> Personally I would prefer that all configuration if possible is > propagated > > >> through the bond port. So if a user wants to use a port which doesn't > support > > >> multicast subscription then all ports in the bond need to be in > promiscuous > > >> mode, and the user needs to explicitly enable it through the bond > port, that > > way > > >> at least we can guarantee consist traffic irrespective of which ports > in the > > bond > > >> are active at any one time. > > > > > > That's exactly what I said :) > > > > > > > :) > > > > I guess so, but it was the configuration directly via the slave port bit > > which had me concerned, I think this needs to be managed directly from > > the bond port, ideally they > > Yes, but you know that bond is far from supporting all the ethdev > configuration > and there is not a clear limitation in bond documentation to do > configuration via slaves. > > I agree that it's better to use some bond API to configure the LACP mc > group. > > > > > > > > > I suggest to do it like next, > > > To add one more parameter for LACP which means how to configure the > > LACP MC group - lacp_mc_grp_conf: > > > 1. rte_flow. > > > 2. flow director. > > > 3. add_mac. > > > 3. set_mc_add_list > > > 4. allmulti > > > 5. promiscuous > > > Maybe more... 
or less :) > > > > > > By this way the user decides how to do it, if it's fail for a slave, > the salve > > should be rejected. > > > Conflict with another configuration(for example calling to promiscuous > > disable while running LACP lacp_mc_grp_conf=5) should raise an error. > > > > > > What do you think? > > > > > > > Supporting an LACP mc group specific configuration does make sense, but > > I wonder if this could just be handled by default during slave add. > > > > > > 1 and 2 are essentially the same hardware filtering offload mode, and > > the other modes are irrelevant if this is enabled, it should not be > > possible to add the slave if the bond is configured for this mode, or > > possible to change the bond into this mode if an existing slave doesn't > > support it. > > > > > 3 should be the default expected behavior, but rte_eth_bond_slave_add() > > should fail if the slave being added doesn't support either adding the > > MAC to the slave or adding the LACP MC address. > > > > Then the user could try either rte_eth_allmulticast_enable() on the bond > > port and then try to add the slave again, which should fail if existing > > slave didn't support allmulticast or the add slave would fail again if > > the slave didn't support allmulticast and finally just call > > rte_eth_promiscuous_enable() on the bond and then try to re-add the that > > slave. > > > > but maybe having a explicit configuration parameter would be better. > > I don't sure you understand exactly what I’m suggesting here, again: > I suggest to add a new parameter to the LACP mode called > lacp_mc_grp_conf(or something else). > So, when the user configures LACP (mode 4) it must to configure the > lacp_mc_grp_conf parameter > to one of the options I suggested. > This parameter is not per slave means the bond PMD will use the selected > option to configure the LACP MC > group for all the slave ports. > > If one of the slaves doesn't support the selected option it should be > rejected. 
> Conflicts should raise an error. > I agree here. Yes, if a slave can't manage to subscribe to the multicast group, an error should be raised. The only way for this to happen is if you don't have promisc support, which is the ultimate fallback. > > The advantages are: The user knows which option is better to synchronize with his application. The user knows better than the bond PMD what is the slaves capabilities. All the slaves are configured by the same way - consistent traffic. > > > It would be ideal if all the slaves had the same features and capabilities. That wasn't enforced before, so this would be a new restriction that would be less flexible than what we currently have. That doesn't seem like an improvement. The bonding user probably doesn't care which mode is used. The bonding user just wants bonding to work. He doesn't care about the details. If I am writing an application with this proposed API, I need to make a list of adapters and what they support (and keep this up to date as DPDK evolves). Ugh. ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-02 21:10 ` Chas Williams @ 2018-08-03 5:47 ` Matan Azrad 2018-08-06 16:00 ` Chas Williams 0 siblings, 1 reply; 26+ messages in thread From: Matan Azrad @ 2018-08-03 5:47 UTC (permalink / raw) To: Chas Williams; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams Hi Chas From: Chas Williams [mailto:3chas3@gmail.com] On Thu, Aug 2, 2018 at 1:33 > PM Matan Azrad <matan@mellanox.com> wrote: > > > > > I suggest to do it like next, > > > To add one more parameter for LACP which means how to configure the > > LACP MC group - lacp_mc_grp_conf: > > > 1. rte_flow. > > > 2. flow director. > > > 3. add_mac. > > > 3. set_mc_add_list > > > 4. allmulti > > > 5. promiscuous > > > Maybe more... or less :) > > > > > > By this way the user decides how to do it, if it's fail for a slave, > > > the salve > > should be rejected. > > > Conflict with another configuration(for example calling to > > > promiscuous > > disable while running LACP lacp_mc_grp_conf=5) should raise an error. > > > > > > What do you think? > > > > > > > Supporting an LACP mc group specific configuration does make sense, > > but I wonder if this could just be handled by default during slave add. > > > > > > 1 and 2 are essentially the same hardware filtering offload mode, and > > the other modes are irrelevant if this is enabled, it should not be > > possible to add the slave if the bond is configured for this mode, or > > possible to change the bond into this mode if an existing slave > > doesn't support it. > > > > > 3 should be the default expected behavior, but > > rte_eth_bond_slave_add() should fail if the slave being added doesn't > > support either adding the MAC to the slave or adding the LACP MC address. 
> > > > Then the user could try either rte_eth_allmulticast_enable() on the > > bond port and then try to add the slave again, which should fail if > > existing slave didn't support allmulticast or the add slave would fail > > again if the slave didn't support allmulticast and finally just call > > rte_eth_promiscuous_enable() on the bond and then try to re-add the > > that slave. > > > > but maybe having a explicit configuration parameter would be better. > > I don't sure you understand exactly what I’m suggesting here, again: > I suggest to add a new parameter to the LACP mode called > lacp_mc_grp_conf(or something else). > So, when the user configures LACP (mode 4) it must to configure the > lacp_mc_grp_conf parameter to one of the options I suggested. > This parameter is not per slave means the bond PMD will use the selected > option to configure the LACP MC group for all the slave ports. > > If one of the slaves doesn't support the selected option it should be rejected. > Conflicts should rais an error. > > I agree here. Yes, if a slave can't manage to subscribe to the multicast group, > an error should be raised. The only way for this to happen is that you don't > have promisc support which is the ultimate fallback. > The advantages are: > The user knows which option is better to synchronize with his application. > The user knows better than the bond PMD what is the slaves capabilities. > All the slaves are configured by the same way - consistent traffic. > > > It would be ideal if all the slaves would have the same features and > capabilities. There wasn't enforced before, so this would be a new restriction > that would be less flexible than what we currently have. That doesn't seem like > an improvement. > The bonding user probably doesn't care which mode is used. > The bonding user just wants bonding to work. He doesn't care about the details. 
If I am writing > > an application with this proposed API, I need to make a list of adapters and > what they support (and keep this up to date as DPDK evolves). Ugh. Applications commonly know the capabilities of the NICs they work with. I know at least one big application which is really suffering because the bond configures promiscuous in mode 4 without the application asking (it's considered a bug in DPDK there). I think that providing another option will be better. So, providing applications a list of options will ease the application's life and may be a big improvement while not hurting the current behavior. Matan ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-03 5:47 ` Matan Azrad @ 2018-08-06 16:00 ` Chas Williams 2018-08-06 17:46 ` Matan Azrad 0 siblings, 1 reply; 26+ messages in thread From: Chas Williams @ 2018-08-06 16:00 UTC (permalink / raw) To: Matan Azrad; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad <matan@mellanox.com> wrote: > Hi Chas > > From: Chas Williams [mailto:3chas3@gmail.com] On Thu, Aug 2, 2018 at 1:33 > > PM Matan Azrad <matan@mellanox.com> wrote: > > > > > > > I suggest to do it like next, > > > > To add one more parameter for LACP which means how to configure the > > > LACP MC group - lacp_mc_grp_conf: > > > > 1. rte_flow. > > > > 2. flow director. > > > > 3. add_mac. > > > > 3. set_mc_add_list > > > > 4. allmulti > > > > 5. promiscuous > > > > Maybe more... or less :) > > > > > > > > By this way the user decides how to do it, if it's fail for a slave, > > > > the salve > > > should be rejected. > > > > Conflict with another configuration(for example calling to > > > > promiscuous > > > disable while running LACP lacp_mc_grp_conf=5) should raise an error. > > > > > > > > What do you think? > > > > > > > > > > Supporting an LACP mc group specific configuration does make sense, > > > but I wonder if this could just be handled by default during slave add. > > > > > > > > > 1 and 2 are essentially the same hardware filtering offload mode, and > > > the other modes are irrelevant if this is enabled, it should not be > > > possible to add the slave if the bond is configured for this mode, or > > > possible to change the bond into this mode if an existing slave > > > doesn't support it. > > > > > > > > 3 should be the default expected behavior, but > > > rte_eth_bond_slave_add() should fail if the slave being added doesn't > > > support either adding the MAC to the slave or adding the LACP MC > address. 
> > > > > > Then the user could try either rte_eth_allmulticast_enable() on the > > > bond port and then try to add the slave again, which should fail if > > > existing slave didn't support allmulticast or the add slave would fail > > > again if the slave didn't support allmulticast and finally just call > > > rte_eth_promiscuous_enable() on the bond and then try to re-add the > > > that slave. > > > > > > but maybe having a explicit configuration parameter would be better. > > > > I don't sure you understand exactly what I’m suggesting here, again: > > I suggest to add a new parameter to the LACP mode called > > lacp_mc_grp_conf(or something else). > > So, when the user configures LACP (mode 4) it must to configure the > > lacp_mc_grp_conf parameter to one of the options I suggested. > > This parameter is not per slave means the bond PMD will use the selected > > option to configure the LACP MC group for all the slave ports. > > > > If one of the slaves doesn't support the selected option it should be > rejected. > > Conflicts should rais an error. > > > > I agree here. Yes, if a slave can't manage to subscribe to the > multicast group, > > an error should be raised. The only way for this to happen is that you > don't > > have promisc support which is the ultimate fallback. > > > The advantages are: > > The user knows which option is better to synchronize with his > application. > > The user knows better than the bond PMD what is the slaves capabilities. > > All the slaves are configured by the same way - consistent traffic. > > > > > > It would be ideal if all the slaves would have the same features and > > capabilities. There wasn't enforced before, so this would be a new > restriction > > that would be less flexible than what we currently have. That doesn't > seem like > > an improvement. > > > The bonding user probably doesn't care which mode is used. > > The bonding user just wants bonding to work. He doesn't care about the > details. 
If I am writing > > an application with this proposed API, I need to make a list of adapters > and > > what they support (and keep this up to date as DPDK evolves). Ugh. > > The applications commonly know what are the nics capabilities they work > with. > > I know at least an one big application which really suffering because the > bond > configures promiscuous in mode 4 without the application asking (it's > considered there as a bug in dpdk). > I think that providing another option will be better. > I think providing another option will be better as well. However, we disagree on the option. If the PMD has no other way to subscribe to the multicast group, it has to use promiscuous mode. Providing a list of options only makes life complicated for the developer and doesn't really make any difference in the end results. For instance, if the least common denominator between the two PMDs is promiscuous mode, you are going to be forced to run both in promiscuous mode instead of selecting the best mode for each PMD. DPDK already has a promiscuous flag for the PMDs: RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable); (*dev->dev_ops->promiscuous_enable)(dev); dev->data->promiscuous = 1; So the bonding PMD should already be able to tell if it can safely propagate the enable/disable of promiscuous mode. However, for 802.3ad, that is always going to be a no until we add some other way to subscribe to the multicast group. > > So, providing to applications a list of options will ease the application life and may be big improvement while not hurting the current behavior. > > Matan > > ^ permalink raw reply [flat|nested] 26+ messages in thread
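Chas's point above - that the bonding PMD could decide per slave whether a promiscuous disable is safe to propagate - can be sketched as a small rule. This is a self-contained model, not DPDK code: the struct and field names are hypothetical stand-ins for the per-slave state (analogous to dev->data->promiscuous plus a "needed for LACP" flag) that the bonding PMD would have to track.

```c
/* Hypothetical per-slave state the bonding PMD would track. */
struct slave_promisc_state {
    int needed_for_lacp;  /* set when no MC-group subscription was
                           * possible and promiscuous mode is the only
                           * way this slave can receive LACPDUs */
};

/* Promiscuous state the slave should end up in after the application
 * disables promiscuous mode on the bond port: a slave that depends on
 * promiscuous mode for LACPDU reception must stay promiscuous. */
static int
slave_promisc_after_bond_disable(const struct slave_promisc_state *s)
{
    return s->needed_for_lacp ? 1 : 0;
}
```

Until some other way to subscribe to the multicast group exists, every 802.3ad slave effectively has needed_for_lacp set, which is why the disable is "always going to be a no" today.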
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-06 16:00 ` Chas Williams @ 2018-08-06 17:46 ` Matan Azrad 2018-08-06 19:01 ` Chas Williams 0 siblings, 1 reply; 26+ messages in thread From: Matan Azrad @ 2018-08-06 17:46 UTC (permalink / raw) To: Chas Williams; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams Hi Chas From: Chas Williams >On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad <mailto:matan@mellanox.com> wrote: >Hi Chas > > From: Chas Williams [mailto:mailto:3chas3@gmail.com] On Thu, Aug 2, 2018 at 1:33 >> PM Matan Azrad <mailto:matan@mellanox.com> wrote: >> > >> > > I suggest to do it like next, >> > > To add one more parameter for LACP which means how to configure the >> > LACP MC group - lacp_mc_grp_conf: >> > > 1. rte_flow. >> > > 2. flow director. >> > > 3. add_mac. >> > > 3. set_mc_add_list >> > > 4. allmulti >> > > 5. promiscuous >> > > Maybe more... or less :) >> > > >> > > By this way the user decides how to do it, if it's fail for a slave, >> > > the salve >> > should be rejected. >> > > Conflict with another configuration(for example calling to >> > > promiscuous >> > disable while running LACP lacp_mc_grp_conf=5) should raise an error. >> > > >> > > What do you think? >> > > >> > >> > Supporting an LACP mc group specific configuration does make sense, >> > but I wonder if this could just be handled by default during slave add. >> > >> > >> > 1 and 2 are essentially the same hardware filtering offload mode, and >> > the other modes are irrelevant if this is enabled, it should not be >> > possible to add the slave if the bond is configured for this mode, or >> > possible to change the bond into this mode if an existing slave >> > doesn't support it. >> >> > >> > 3 should be the default expected behavior, but >> > rte_eth_bond_slave_add() should fail if the slave being added doesn't >> > support either adding the MAC to the slave or adding the LACP MC address. 
>> > >> > Then the user could try either rte_eth_allmulticast_enable() on the >> > bond port and then try to add the slave again, which should fail if >> > existing slave didn't support allmulticast or the add slave would fail >> > again if the slave didn't support allmulticast and finally just call >> > rte_eth_promiscuous_enable() on the bond and then try to re-add the >> > that slave. >> > >> > but maybe having a explicit configuration parameter would be better. >> >> I don't sure you understand exactly what I’m suggesting here, again: >> I suggest to add a new parameter to the LACP mode called >> lacp_mc_grp_conf(or something else). >> So, when the user configures LACP (mode 4) it must to configure the >> lacp_mc_grp_conf parameter to one of the options I suggested. >> This parameter is not per slave means the bond PMD will use the selected >> option to configure the LACP MC group for all the slave ports. >> >> If one of the slaves doesn't support the selected option it should be rejected. >> Conflicts should rais an error. >> >> I agree here. Yes, if a slave can't manage to subscribe to the multicast group, >> an error should be raised. The only way for this to happen is that you don't >> have promisc support which is the ultimate fallback. > >> The advantages are: >> The user knows which option is better to synchronize with his application. >> The user knows better than the bond PMD what is the slaves capabilities. >> All the slaves are configured by the same way - consistent traffic. >> >> >> It would be ideal if all the slaves would have the same features and >> capabilities. There wasn't enforced before, so this would be a new restriction >> that would be less flexible than what we currently have. That doesn't seem like >> an improvement. > >> The bonding user probably doesn't care which mode is used. >> The bonding user just wants bonding to work. He doesn't care about the details. 
If I am writing >> an application with this proposed API, I need to make a list of adapters and >> what they support (and keep this up to date as DPDK evolves). Ugh. > >The applications commonly know what are the nics capabilities they work with. > >I know at least an one big application which really suffering because the bond >configures promiscuous in mode 4 without the application asking (it's considered there as a bug in dpdk). >I think that providing another option will be better. > >I think providing another option will be better as well. However we disagree on the option. >If the PMD has no other way to subscribe the multicast group, it has to use promiscuous mode. Yes, it is true, but there are a lot of other and better options; promiscuous is greedy! It should be the last alternative to use. >Providing a list of options only makes life complicated for the developer and doesn't really >make any difference in the end results. A big difference, for example: let's say the bonding groups 2 devices that support rte_flow. The user wants neither promiscuous nor all-multicast; he just wants to get his MAC traffic + the LACP MC group traffic (a realistic use case). If he has an option to tell the bond PMD "please use rte_flow to configure the specific LACP MC group", it will be great. Think how much work these applications must do with the current behavior. > For instance, if the least common denominator between the two PMDs is promiscuous mode, > you are going to be forced to run both in promiscuous mode >instead of selecting the best mode for each PMD. In this case promiscuous is better. Using a different configuration per slave is worse, and against the bonding PMD principle of getting consistent traffic from the slaves. So, if one slave uses allmulti and one uses promiscuous, the application may get inconsistent traffic, and it may trigger a lot of problems and complications for some applications.
>DPDK already has a promiscuous flag for the PMDs: > > RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable); > (*dev->dev_ops->promiscuous_enable)(dev); > dev->data->promiscuous = 1; > >So the bonding PMD should already be able to tell whether it can safely propagate the enable/disable >for promiscuous mode. However, for 802.3ad, that is always going to be a no until we add >some other way to subscribe to the multicast group. > > >So, providing applications a list of options will ease the application's life and may be a big improvement >while not hurting the current behavior. > >Matan > ^ permalink raw reply [flat|nested] 26+ messages in thread
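The dispatch being discussed here mirrors the `switch` in `bond_ethdev_promiscuous_enable()` shown in the patch at the top of the thread. Below is a minimal, self-contained sketch of that per-mode propagation logic; the enum, struct, and function names are invented stand-ins, not the real DPDK bonding driver's internals.

```c
#include <stdbool.h>

/* Invented stand-ins for the bonding driver internals discussed in the
 * thread; the real driver's structures and enums differ. */
enum bond_mode {
	MODE_ROUND_ROBIN,
	MODE_ACTIVE_BACKUP,
	MODE_8023AD
};

#define MAX_SLAVES 8

struct bond_internals {
	enum bond_mode mode;
	int slave_count;
	int primary_slave;           /* index of the primary slave */
	bool promisc[MAX_SLAVES];    /* per-slave promiscuous state */
};

/* With the patch, 802.3ad (mode 4) propagates promiscuous to every slave,
 * just like round-robin; active-backup still touches only the primary. */
static void bond_promiscuous_enable(struct bond_internals *in)
{
	int i;

	switch (in->mode) {
	case MODE_ROUND_ROBIN:
	case MODE_8023AD:            /* previously a silent no-op */
		for (i = 0; i < in->slave_count; i++)
			in->promisc[i] = true;
		break;
	case MODE_ACTIVE_BACKUP:
		in->promisc[in->primary_slave] = true;
		break;
	}
}
```

The point of contention in the thread is exactly the `MODE_8023AD` fall-through: before the patch it was a separate `break`-only case, so user calls were silently ignored.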
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-06 17:46 ` Matan Azrad @ 2018-08-06 19:01 ` Chas Williams 2018-08-06 19:35 ` Matan Azrad 0 siblings, 1 reply; 26+ messages in thread From: Chas Williams @ 2018-08-06 19:01 UTC (permalink / raw) To: Matan Azrad; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams On Mon, Aug 6, 2018 at 1:46 PM Matan Azrad <matan@mellanox.com> wrote: > Hi Chas > > From: Chas Williams > >On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad <mailto:matan@mellanox.com> > wrote: > >Hi Chas > > > > From: Chas Williams [mailto:mailto:3chas3@gmail.com] On Thu, Aug 2, > 2018 at 1:33 > >> PM Matan Azrad <mailto:matan@mellanox.com> wrote: > >> > > >> > > I suggest to do it like next, > >> > > To add one more parameter for LACP which means how to configure the > >> > LACP MC group - lacp_mc_grp_conf: > >> > > 1. rte_flow. > >> > > 2. flow director. > >> > > 3. add_mac. > >> > > 3. set_mc_add_list > >> > > 4. allmulti > >> > > 5. promiscuous > >> > > Maybe more... or less :) > >> > > > >> > > By this way the user decides how to do it, if it's fail for a slave, > >> > > the salve > >> > should be rejected. > >> > > Conflict with another configuration(for example calling to > >> > > promiscuous > >> > disable while running LACP lacp_mc_grp_conf=5) should raise an error. > >> > > > >> > > What do you think? > >> > > > >> > > >> > Supporting an LACP mc group specific configuration does make sense, > >> > but I wonder if this could just be handled by default during slave > add. > >> > > >> > > >> > 1 and 2 are essentially the same hardware filtering offload mode, and > >> > the other modes are irrelevant if this is enabled, it should not be > >> > possible to add the slave if the bond is configured for this mode, or > >> > possible to change the bond into this mode if an existing slave > >> > doesn't support it. 
> >> > >> > > >> > 3 should be the default expected behavior, but > >> > rte_eth_bond_slave_add() should fail if the slave being added doesn't > >> > support either adding the MAC to the slave or adding the LACP MC > address. > >> > > >> > Then the user could try either rte_eth_allmulticast_enable() on the > >> > bond port and then try to add the slave again, which should fail if > >> > existing slave didn't support allmulticast or the add slave would fail > >> > again if the slave didn't support allmulticast and finally just call > >> > rte_eth_promiscuous_enable() on the bond and then try to re-add the > >> > that slave. > >> > > >> > but maybe having a explicit configuration parameter would be better. > >> > >> I don't sure you understand exactly what I’m suggesting here, again: > >> I suggest to add a new parameter to the LACP mode called > >> lacp_mc_grp_conf(or something else). > >> So, when the user configures LACP (mode 4) it must to configure the > >> lacp_mc_grp_conf parameter to one of the options I suggested. > >> This parameter is not per slave means the bond PMD will use the selected > >> option to configure the LACP MC group for all the slave ports. > >> > >> If one of the slaves doesn't support the selected option it should be > rejected. > >> Conflicts should rais an error. > >> > >> I agree here. Yes, if a slave can't manage to subscribe to the > multicast group, > >> an error should be raised. The only way for this to happen is that you > don't > >> have promisc support which is the ultimate fallback. > > > >> The advantages are: > >> The user knows which option is better to synchronize with his > application. > >> The user knows better than the bond PMD what is the slaves capabilities. > >> All the slaves are configured by the same way - consistent traffic. > >> > >> > >> It would be ideal if all the slaves would have the same features and > >> capabilities. 
There wasn't enforced before, so this would be a new > restriction > >> that would be less flexible than what we currently have. That doesn't > seem like > >> an improvement. > > > >> The bonding user probably doesn't care which mode is used. > >> The bonding user just wants bonding to work. He doesn't care about the > details. If I am writing > >> an application with this proposed API, I need to make a list of > adapters and > >> what they support (and keep this up to date as DPDK evolves). Ugh. > > > >The applications commonly know what are the nics capabilities they work > with. > > > >I know at least an one big application which really suffering because the > bond > >configures promiscuous in mode 4 without the application asking (it's > considered there as a bug in dpdk). > >I think that providing another option will be better. > > > >I think providing another option will be better as well. However we > disagree on the option. > >If the PMD has no other way to subscribe the multicast group, it has to > use promiscuous mode. > > Yes, it is true but there are a lot of other and better options, > promiscuous is greedy! Should be the last alternative to use. > Unfortunately, it's the only option implemented. > > >Providing a list of options only makes life complicated for the developer > and doesn't really > >make any difference in the end results. > > A big different, for example: > Let's say the bonding groups 2 devices that support rte_flow. > The user don't want neither promiscuous nor all multicast, he just want to > get it's mac traffic + LACP MC group traffic,(a realistic use case) > if he has an option to tell to the bond PMD, please use rte_flow to > configure the specific LACP MC group it will be great. > Think how much work these applications should do in the current behavior. > The bond PMD should already know how to do that itself. Again, you are forcing more work on the user to ask them to select between the methods. 
> > > For instance, if the least common denominator between the two PMDs is > promiscuous mode, > > you are going to be forced to run both in promiscuous mode > >instead of selecting the best mode for each PMD. > > In this case promiscuous is better, > Using a different configuration is worst and against the bonding PMD > principle to get a consistent traffic from the slaves. > So, if one uses allmulti and one uses promiscuous the application may get > an inconsistent traffic > and it may trigger a lot of problems and complications for some > applications. > > Those applications should already have those problems. I can make the counter argument that there are potentially applications relying on the broken behavior. We need to ignore those issues and fix this the "right" way. The "right" way, IMHO, is to pass the least amount of traffic possible in each case. > >DPDK already has a promiscuous flag for the PMDs: > > > > RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable); > > (*dev->dev_ops->promiscuous_enable)(dev); > > dev->data->promiscuous = 1; > > > >So the bonding PMD already should be able to tell if it can safely > propagate the enable/disable > >for promiscuous mode. However, for 802.3ad, that is always going to be a > no until we add > >some other way to subscribe to the multicast group. > > > > > >So, providing to applications a list of options will ease the application > life and may be big improvement > >while not hurting the current behavior. > > > >Matan > > > ^ permalink raw reply [flat|nested] 26+ messages in thread
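Chas's "least amount of traffic" position amounts to a per-slave fallback chain: subscribe the slave to the LACP multicast address (the Slow Protocols group 01:80:C2:00:00:02) if it can, fall back to all-multicast, and use promiscuous only as the last resort. A sketch of that ordering with hypothetical capability flags (DPDK does not expose capabilities under these names):

```c
#include <stdbool.h>

/* Hypothetical per-slave capability flags; invented for illustration. */
struct slave_caps {
	bool can_add_mc_addr;    /* e.g. via set_mc_addr_list or rte_flow */
	bool can_allmulticast;
	bool can_promiscuous;
};

enum lacp_rx_method {
	LACP_RX_MC_ADDR,         /* narrowest: just the LACP group */
	LACP_RX_ALLMULTI,        /* wider: all multicast */
	LACP_RX_PROMISC,         /* ultimate fallback: everything */
	LACP_RX_NONE             /* no option works: slave add should fail */
};

/* Pick the least greedy receive mode each slave supports. */
static enum lacp_rx_method choose_lacp_rx(const struct slave_caps *c)
{
	if (c->can_add_mc_addr)
		return LACP_RX_MC_ADDR;
	if (c->can_allmulticast)
		return LACP_RX_ALLMULTI;
	if (c->can_promiscuous)
		return LACP_RX_PROMISC;
	return LACP_RX_NONE;
}
```

Matan's objection, in these terms, is that two slaves of one bond could end up with different methods and therefore deliver inconsistent traffic to the application.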
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-06 19:01 ` Chas Williams @ 2018-08-06 19:35 ` Matan Azrad 2018-09-11 3:31 ` Chas Williams 0 siblings, 1 reply; 26+ messages in thread From: Matan Azrad @ 2018-08-06 19:35 UTC (permalink / raw) To: Chas Williams; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams Hi Chas From: Chas Williams >On Mon, Aug 6, 2018 at 1:46 PM Matan Azrad <mailto:matan@mellanox.com> wrote: >Hi Chas > >From: Chas Williams >>On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad <mailto:mailto:matan@mellanox.com> wrote: >>Hi Chas >> >> From: Chas Williams [mailto:mailto:mailto:mailto:3chas3@gmail.com] On Thu, Aug 2, 2018 at 1:33 >>> PM Matan Azrad <mailto:mailto:matan@mellanox.com> wrote: >>> > >>> > > I suggest to do it like next, >>> > > To add one more parameter for LACP which means how to configure the >>> > LACP MC group - lacp_mc_grp_conf: >>> > > 1. rte_flow. >>> > > 2. flow director. >>> > > 3. add_mac. >>> > > 3. set_mc_add_list >>> > > 4. allmulti >>> > > 5. promiscuous >>> > > Maybe more... or less :) >>> > > >>> > > By this way the user decides how to do it, if it's fail for a slave, >>> > > the salve >>> > should be rejected. >>> > > Conflict with another configuration(for example calling to >>> > > promiscuous >>> > disable while running LACP lacp_mc_grp_conf=5) should raise an error. >>> > > >>> > > What do you think? >>> > > >>> > >>> > Supporting an LACP mc group specific configuration does make sense, >>> > but I wonder if this could just be handled by default during slave add. >>> > >>> > >>> > 1 and 2 are essentially the same hardware filtering offload mode, and >>> > the other modes are irrelevant if this is enabled, it should not be >>> > possible to add the slave if the bond is configured for this mode, or >>> > possible to change the bond into this mode if an existing slave >>> > doesn't support it. 
>>> >>> > >>> > 3 should be the default expected behavior, but >>> > rte_eth_bond_slave_add() should fail if the slave being added doesn't >>> > support either adding the MAC to the slave or adding the LACP MC address. >>> > >>> > Then the user could try either rte_eth_allmulticast_enable() on the >>> > bond port and then try to add the slave again, which should fail if >>> > existing slave didn't support allmulticast or the add slave would fail >>> > again if the slave didn't support allmulticast and finally just call >>> > rte_eth_promiscuous_enable() on the bond and then try to re-add the >>> > that slave. >>> > >>> > but maybe having a explicit configuration parameter would be better. >>> >>> I don't sure you understand exactly what I’m suggesting here, again: >>> I suggest to add a new parameter to the LACP mode called >>> lacp_mc_grp_conf(or something else). >>> So, when the user configures LACP (mode 4) it must to configure the >>> lacp_mc_grp_conf parameter to one of the options I suggested. >>> This parameter is not per slave means the bond PMD will use the selected >>> option to configure the LACP MC group for all the slave ports. >>> >>> If one of the slaves doesn't support the selected option it should be rejected. >>> Conflicts should rais an error. >>> >>> I agree here. Yes, if a slave can't manage to subscribe to the multicast group, >>> an error should be raised. The only way for this to happen is that you don't >>> have promisc support which is the ultimate fallback. >> >>> The advantages are: >>> The user knows which option is better to synchronize with his application. >>> The user knows better than the bond PMD what is the slaves capabilities. >>> All the slaves are configured by the same way - consistent traffic. >>> >>> >>> It would be ideal if all the slaves would have the same features and >>> capabilities. There wasn't enforced before, so this would be a new restriction >>> that would be less flexible than what we currently have. 
That doesn't seem like >>> an improvement. >> >>> The bonding user probably doesn't care which mode is used. >>> The bonding user just wants bonding to work. He doesn't care about the details. If I am writing >>> an application with this proposed API, I need to make a list of adapters and >>> what they support (and keep this up to date as DPDK evolves). Ugh. >> >>The applications commonly know what are the nics capabilities they work with. >> >>I know at least an one big application which really suffering because the bond >>configures promiscuous in mode 4 without the application asking (it's considered there as a bug in dpdk). >>I think that providing another option will be better. >> >>I think providing another option will be better as well. However we disagree on the option. >>If the PMD has no other way to subscribe the multicast group, it has to use promiscuous mode. > >>Yes, it is true but there are a lot of other and better options, promiscuous is greedy! Should be the last alternative to use. > >Unfortunately, it's the only option implemented. Yes, I know; I suggest to change it, or at least not to make it worse. >>Providing a list of options only makes life complicated for the developer and doesn't really >>make any difference in the end results. > >>A big different, for example: >>Let's say the bonding groups 2 devices that support rte_flow. >>The user don't want neither promiscuous nor all multicast, he just want to get it's mac traffic + LACP MC group traffic,(a realistic use case) >> if he has an option to tell to the bond PMD, please use rte_flow to configure the specific LACP MC group it will be great. >>Think how much work these applications should do in the current behavior. > >The bond PMD should already know how to do that itself. The bond can do it, with a lot of complexity, but again the user must know what the bond chose in order to stay synchronized.
So, I think it's better that the user defines it, because it is a traffic configuration (the same as the promiscuous configuration - the user configures it). > Again, you are forcing more work on the user to ask them to select between the methods. We can create a default option, as now (promiscuous). >> For instance, if the least common denominator between the two PMDs is promiscuous mode, >> you are going to be forced to run both in promiscuous mode >>instead of selecting the best mode for each PMD. > >>In this case promiscuous is better, >>Using a different configuration is worst and against the bonding PMD principle to get a consistent traffic from the slaves. >>So, if one uses allmulti and one uses promiscuous the application may get an inconsistent traffic >>and it may trigger a lot of problems and complications for some applications. > >Those applications should already have those problems. > I can make the counter >argument that there are potentially applications relying on the broken behavior. You are right. So adding allmulticast would require changes in those applications. >We need to ignore those issues and fix this the "right" way. The "right" way IMHO >is to pass the least amount of traffic possible in each case. Not at the cost of inconsistency, but it looks like we do not agree here. ^ permalink raw reply [flat|nested] 26+ messages in thread
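Matan's consistency argument, by contrast, is that one user-chosen option applies to every slave, and a slave that cannot honor it is rejected. A sketch of that validation step, with invented names for the proposed `lacp_mc_grp_conf` options (the parameter itself is only a proposal in this thread, not an existing DPDK devarg):

```c
/* Invented encoding of the proposed lacp_mc_grp_conf options. */
enum lacp_mc_grp_conf {
	LACP_CONF_RTE_FLOW,
	LACP_CONF_ADD_MAC,
	LACP_CONF_ALLMULTI,
	LACP_CONF_PROMISC
};

struct slave {
	unsigned int caps;    /* bitmask: bit n set => option n supported */
};

/* One option for all slaves: return 0 only if every slave supports the
 * user's choice; otherwise the slave add (or the bond configuration)
 * should be rejected, per Matan's proposal. */
static int validate_lacp_mc_grp_conf(enum lacp_mc_grp_conf conf,
				     const struct slave *slaves, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (!(slaves[i].caps & (1u << conf)))
			return -1;
	return 0;
}
```

This trades flexibility (each slave in its best mode) for uniformity (all slaves see the same class of traffic), which is exactly the disagreement in the thread.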
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-06 19:35 ` Matan Azrad @ 2018-09-11 3:31 ` Chas Williams 2018-09-12 5:56 ` Matan Azrad 0 siblings, 1 reply; 26+ messages in thread From: Chas Williams @ 2018-09-11 3:31 UTC (permalink / raw) To: Matan Azrad; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams On Mon, Aug 6, 2018 at 3:35 PM Matan Azrad <matan@mellanox.com> wrote: > > > Hi Chas > > From: Chas Williams > >On Mon, Aug 6, 2018 at 1:46 PM Matan Azrad <mailto:matan@mellanox.com> wrote: > >Hi Chas > > > >From: Chas Williams > >>On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad <mailto:mailto:matan@mellanox.com> wrote: > >>Hi Chas > >> > >> From: Chas Williams [mailto:mailto:mailto:mailto:3chas3@gmail.com] On Thu, Aug 2, 2018 at 1:33 > >>> PM Matan Azrad <mailto:mailto:matan@mellanox.com> wrote: > >>> > > >>> > > I suggest to do it like next, > >>> > > To add one more parameter for LACP which means how to configure the > >>> > LACP MC group - lacp_mc_grp_conf: > >>> > > 1. rte_flow. > >>> > > 2. flow director. > >>> > > 3. add_mac. > >>> > > 3. set_mc_add_list > >>> > > 4. allmulti > >>> > > 5. promiscuous > >>> > > Maybe more... or less :) > >>> > > > >>> > > By this way the user decides how to do it, if it's fail for a slave, > >>> > > the salve > >>> > should be rejected. > >>> > > Conflict with another configuration(for example calling to > >>> > > promiscuous > >>> > disable while running LACP lacp_mc_grp_conf=5) should raise an error. > >>> > > > >>> > > What do you think? > >>> > > > >>> > > >>> > Supporting an LACP mc group specific configuration does make sense, > >>> > but I wonder if this could just be handled by default during slave add. 
> >>> > > >>> > > >>> > 1 and 2 are essentially the same hardware filtering offload mode, and > >>> > the other modes are irrelevant if this is enabled, it should not be > >>> > possible to add the slave if the bond is configured for this mode, or > >>> > possible to change the bond into this mode if an existing slave > >>> > doesn't support it. > >>> > >>> > > >>> > 3 should be the default expected behavior, but > >>> > rte_eth_bond_slave_add() should fail if the slave being added doesn't > >>> > support either adding the MAC to the slave or adding the LACP MC address. > >>> > > >>> > Then the user could try either rte_eth_allmulticast_enable() on the > >>> > bond port and then try to add the slave again, which should fail if > >>> > existing slave didn't support allmulticast or the add slave would fail > >>> > again if the slave didn't support allmulticast and finally just call > >>> > rte_eth_promiscuous_enable() on the bond and then try to re-add the > >>> > that slave. > >>> > > >>> > but maybe having a explicit configuration parameter would be better. > >>> > >>> I don't sure you understand exactly what I’m suggesting here, again: > >>> I suggest to add a new parameter to the LACP mode called > >>> lacp_mc_grp_conf(or something else). > >>> So, when the user configures LACP (mode 4) it must to configure the > >>> lacp_mc_grp_conf parameter to one of the options I suggested. > >>> This parameter is not per slave means the bond PMD will use the selected > >>> option to configure the LACP MC group for all the slave ports. > >>> > >>> If one of the slaves doesn't support the selected option it should be rejected. > >>> Conflicts should rais an error. > >>> > >>> I agree here. Yes, if a slave can't manage to subscribe to the multicast group, > >>> an error should be raised. The only way for this to happen is that you don't > >>> have promisc support which is the ultimate fallback. 
> >> > >>> The advantages are: > >>> The user knows which option is better to synchronize with his application. > >>> The user knows better than the bond PMD what is the slaves capabilities. > >>> All the slaves are configured by the same way - consistent traffic. > >>> > >>> > >>> It would be ideal if all the slaves would have the same features and > >>> capabilities. There wasn't enforced before, so this would be a new restriction > >>> that would be less flexible than what we currently have. That doesn't seem like > >>> an improvement. > >> > >>> The bonding user probably doesn't care which mode is used. > >>> The bonding user just wants bonding to work. He doesn't care about the details. If I am writing > >>> an application with this proposed API, I need to make a list of adapters and > >>> what they support (and keep this up to date as DPDK evolves). Ugh. > >> > >>The applications commonly know what are the nics capabilities they work with. > >> > >>I know at least an one big application which really suffering because the bond > >>configures promiscuous in mode 4 without the application asking (it's considered there as a bug in dpdk). > >>I think that providing another option will be better. > >> > >>I think providing another option will be better as well. However we disagree on the option. > >>If the PMD has no other way to subscribe the multicast group, it has to use promiscuous mode. > > > >>Yes, it is true but there are a lot of other and better options, promiscuous is greedy! Should be the last alternative to use. > > > >Unfortunately, it's the only option implemented. > > Yes, I know, I suggest to change it or at least not to make it worst. > > >>Providing a list of options only makes life complicated for the developer and doesn't really > >>make any difference in the end results. > > > >>A big different, for example: > >>Let's say the bonding groups 2 devices that support rte_flow. 
> >>The user don't want neither promiscuous nor all multicast, he just want to get it's mac traffic + LACP MC group traffic,(a realistic use case) > >> if he has an option to tell to the bond PMD, please use rte_flow to configure the specific LACP MC group it will be great. > >>Think how much work these applications should do in the current behavior. > > > >The bond PMD should already know how to do that itself. > > The bond can do it with a lot of complexity, but again the user must know what the bond chose to be synchronized. > So, I think it's better that the user will define it because it is a traffic configuration (the same as promiscuous configuration - the user configures it) > > Again, you are forcing more work on the user to ask them to select between the methods. > > We can create a default option as now(promiscuous). > > >> For instance, if the least common denominator between the two PMDs is promiscuous mode, > >> you are going to be forced to run both in promiscuous mode > >>instead of selecting the best mode for each PMD. > > > >>In this case promiscuous is better, > >>Using a different configuration is worst and against the bonding PMD principle to get a consistent traffic from the slaves. > >>So, if one uses allmulti and one uses promiscuous the application may get an inconsistent traffic > >>and it may trigger a lot of problems and complications for some applications. > > > >Those applications should already have those problems. > > I can make the counter > >argument that there are potentially applications relying on the broken behavior. > > You right. So adding allmulticast will require changes in these applications. > > >We need to ignore those issues and fix this the "right" way. The "right" way IMHO > >is the pass the least amount of traffic possible in each case. > > Not in cost of an inconsistency, but looks like we are not agree here. 
> I have recently run into this issue again with a device that doesn't support promiscuous, but does let me subscribe to the appropriate multicast groups. At this point, I am leaning toward adding another API call to the bonding API so that the user can provide a callback to set up whatever they want on the slaves. The default setup routine would be to enable promiscuous. Comments? ^ permalink raw reply [flat|nested] 26+ messages in thread
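One possible shape for the callback API Chas is leaning toward: the application registers a per-slave setup hook, and the bond falls back to the historical promiscuous behaviour when none is registered. All names here are invented; this is a sketch of the proposal, not an existing DPDK API, and the per-port state is a stub.

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_PORTS 64

static bool promisc_enabled[MAX_PORTS];  /* stand-in for driver state */

/* Hypothetical hook type: called once per slave when it joins the bond. */
typedef int (*bond_slave_setup_cb)(unsigned short slave_port_id, void *arg);

static bond_slave_setup_cb setup_cb;     /* NULL => use the default */
static void *setup_arg;

/* Default keeps today's behaviour: enable promiscuous on the slave. */
static int default_slave_setup(unsigned short port_id, void *arg)
{
	(void)arg;
	promisc_enabled[port_id] = true;
	return 0;
}

/* The proposed new bonding API call. */
static void bond_register_slave_setup(bond_slave_setup_cb cb, void *arg)
{
	setup_cb = cb;
	setup_arg = arg;
}

/* Invoked by the bond when a slave is added. */
static int bond_slave_added(unsigned short port_id)
{
	bond_slave_setup_cb cb = setup_cb ? setup_cb : default_slave_setup;

	return cb(port_id, setup_arg);
}
```

An application with a device that cannot do promiscuous would register a callback that, say, programs the LACP multicast group instead, while unmodified applications keep the old behaviour.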
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-09-11 3:31 ` Chas Williams @ 2018-09-12 5:56 ` Matan Azrad 2018-09-13 15:14 ` Chas Williams 0 siblings, 1 reply; 26+ messages in thread From: Matan Azrad @ 2018-09-12 5:56 UTC (permalink / raw) To: Chas Williams; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams Hi Chas From: Chas Williams > On Mon, Aug 6, 2018 at 3:35 PM Matan Azrad <matan@mellanox.com> > wrote: > > > > > > Hi Chas > > > > From: Chas Williams > > >On Mon, Aug 6, 2018 at 1:46 PM Matan Azrad > <mailto:matan@mellanox.com> wrote: > > >Hi Chas > > > > > >From: Chas Williams > > >>On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad > <mailto:mailto:matan@mellanox.com> wrote: > > >>Hi Chas > > >> > > >> From: Chas Williams [mailto:mailto:mailto:mailto:3chas3@gmail.com] > > >> On Thu, Aug 2, 2018 at 1:33 > > >>> PM Matan Azrad <mailto:mailto:matan@mellanox.com> wrote: > > >>> > > > >>> > > I suggest to do it like next, > > >>> > > To add one more parameter for LACP which means how to > > >>> > > configure the > > >>> > LACP MC group - lacp_mc_grp_conf: > > >>> > > 1. rte_flow. > > >>> > > 2. flow director. > > >>> > > 3. add_mac. > > >>> > > 3. set_mc_add_list > > >>> > > 4. allmulti > > >>> > > 5. promiscuous > > >>> > > Maybe more... or less :) > > >>> > > > > >>> > > By this way the user decides how to do it, if it's fail for a > > >>> > > slave, the salve > > >>> > should be rejected. > > >>> > > Conflict with another configuration(for example calling to > > >>> > > promiscuous > > >>> > disable while running LACP lacp_mc_grp_conf=5) should raise an > error. > > >>> > > > > >>> > > What do you think? > > >>> > > > > >>> > > > >>> > Supporting an LACP mc group specific configuration does make > > >>> > sense, but I wonder if this could just be handled by default during > slave add. 
> > >>> > > > >>> > > > >>> > 1 and 2 are essentially the same hardware filtering offload > > >>> > mode, and the other modes are irrelevant if this is enabled, it > > >>> > should not be possible to add the slave if the bond is > > >>> > configured for this mode, or possible to change the bond into > > >>> > this mode if an existing slave doesn't support it. > > >>> > > >>> > > > >>> > 3 should be the default expected behavior, but > > >>> > rte_eth_bond_slave_add() should fail if the slave being added > > >>> > doesn't support either adding the MAC to the slave or adding the > LACP MC address. > > >>> > > > >>> > Then the user could try either rte_eth_allmulticast_enable() on > > >>> > the bond port and then try to add the slave again, which should > > >>> > fail if existing slave didn't support allmulticast or the add > > >>> > slave would fail again if the slave didn't support allmulticast > > >>> > and finally just call > > >>> > rte_eth_promiscuous_enable() on the bond and then try to re-add > > >>> > the that slave. > > >>> > > > >>> > but maybe having a explicit configuration parameter would be > better. > > >>> > > >>> I don't sure you understand exactly what I’m suggesting here, again: > > >>> I suggest to add a new parameter to the LACP mode called > > >>> lacp_mc_grp_conf(or something else). > > >>> So, when the user configures LACP (mode 4) it must to configure > > >>> the lacp_mc_grp_conf parameter to one of the options I suggested. > > >>> This parameter is not per slave means the bond PMD will use the > > >>> selected option to configure the LACP MC group for all the slave ports. > > >>> > > >>> If one of the slaves doesn't support the selected option it should be > rejected. > > >>> Conflicts should rais an error. > > >>> > > >>> I agree here. Yes, if a slave can't manage to subscribe to the > > >>> multicast group, an error should be raised. 
The only way for this > > >>> to happen is that you don't have promisc support which is the ultimate > fallback. > > >> > > >>> The advantages are: > > >>> The user knows which option is better to synchronize with his > application. > > >>> The user knows better than the bond PMD what is the slaves > capabilities. > > >>> All the slaves are configured by the same way - consistent traffic. > > >>> > > >>> > > >>> It would be ideal if all the slaves would have the same features > > >>> and capabilities. There wasn't enforced before, so this would be > > >>> a new restriction that would be less flexible than what we > > >>> currently have. That doesn't seem like an improvement. > > >> > > >>> The bonding user probably doesn't care which mode is used. > > >>> The bonding user just wants bonding to work. He doesn't care about > the details. If I am writing > > >>> an application with this proposed API, I need to make a list of > > >>> adapters and what they support (and keep this up to date as DPDK > evolves). Ugh. > > >> > > >>The applications commonly know what are the nics capabilities they > work with. > > >> > > >>I know at least an one big application which really suffering > > >>because the bond configures promiscuous in mode 4 without the > application asking (it's considered there as a bug in dpdk). > > >>I think that providing another option will be better. > > >> > > >>I think providing another option will be better as well. However we > disagree on the option. > > >>If the PMD has no other way to subscribe the multicast group, it has to > use promiscuous mode. > > > > > >>Yes, it is true but there are a lot of other and better options, > promiscuous is greedy! Should be the last alternative to use. > > > > > >Unfortunately, it's the only option implemented. > > > > Yes, I know, I suggest to change it or at least not to make it worst. 
> > > > >>Providing a list of options only makes life complicated for the > > >>developer and doesn't really make any difference in the end results. > > > > > >>A big different, for example: > > >>Let's say the bonding groups 2 devices that support rte_flow. > > >>The user don't want neither promiscuous nor all multicast, he just > > >>want to get it's mac traffic + LACP MC group traffic,(a realistic use case) if > he has an option to tell to the bond PMD, please use rte_flow to configure > the specific LACP MC group it will be great. > > >>Think how much work these applications should do in the current > behavior. > > > > > >The bond PMD should already know how to do that itself. > > > > The bond can do it with a lot of complexity, but again the user must know > what the bond chose to be synchronized. > > So, I think it's better that the user will define it because it is a > > traffic configuration (the same as promiscuous configuration - the > > user configures it) > > > Again, you are forcing more work on the user to ask them to select > between the methods. > > > > We can create a default option as now(promiscuous). > > > > >> For instance, if the least common denominator between the two PMDs > > >>is promiscuous mode, you are going to be forced to run both in > > >>promiscuous mode instead of selecting the best mode for each PMD. > > > > > >>In this case promiscuous is better, > > >>Using a different configuration is worst and against the bonding PMD > principle to get a consistent traffic from the slaves. > > >>So, if one uses allmulti and one uses promiscuous the application > > >>may get an inconsistent traffic and it may trigger a lot of problems and > complications for some applications. > > > > > >Those applications should already have those problems. > > > I can make the counter > > >argument that there are potentially applications relying on the broken > behavior. > > > > You right. So adding allmulticast will require changes in these applications. 
> > > > >We need to ignore those issues and fix this the "right" way. The > > >"right" way IMHO is the pass the least amount of traffic possible in each > case. > > > > Not in cost of an inconsistency, but looks like we are not agree here. > > > > I have recently run into this issue again with a device that doesn't support > promiscuous, but does let me subscribe to the appropriate multicast groups. > At this point, I am leaning toward adding another API call to the bonding API > so that the user can provide a callback to setup whatever they want on the > slaves. > The default setup routine would be enable promiscuous. > > Comments? The bonding already allows users to do operations directly on the slaves (it exports the port ids via rte_eth_bond_slaves_get), so I don't understand why you need a new API. The only change you may need is to add a parameter that disables the promiscuous configuration in mode 4. ^ permalink raw reply [flat|nested] 26+ messages in thread
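The "least amount of traffic possible" fallback order debated above (specific MC filter first, then allmulticast, promiscuous only as a last resort) can be sketched in plain C. The struct, enum, and function names below are illustrative stand-ins, not DPDK APIs; real code would query the slave's actual device capabilities.

```c
#include <stdbool.h>

/* Hypothetical per-slave capabilities -- illustrative names only,
 * not part of the DPDK API. */
struct slave_caps {
    bool can_add_mc_addr;  /* can filter the LACP MC MAC 01:80:C2:00:00:02 */
    bool can_allmulti;     /* supports all-multicast mode */
    bool can_promisc;      /* supports promiscuous mode */
};

enum lacp_rx_mode {
    LACP_RX_MC_ADDR,   /* preferred: receive only the LACP MC group */
    LACP_RX_ALLMULTI,  /* fallback: receive all multicast */
    LACP_RX_PROMISC,   /* last resort: receive everything */
    LACP_RX_REJECT     /* no way to receive LACPDUs; reject the slave */
};

/* Pick the least greedy mode the slave supports; promiscuous is used
 * only when nothing narrower is available, and a slave that cannot
 * receive LACPDUs at all is rejected at slave-add time. */
enum lacp_rx_mode select_lacp_rx_mode(const struct slave_caps *caps)
{
    if (caps->can_add_mc_addr)
        return LACP_RX_MC_ADDR;
    if (caps->can_allmulti)
        return LACP_RX_ALLMULTI;
    if (caps->can_promisc)
        return LACP_RX_PROMISC;
    return LACP_RX_REJECT;
}
```

Note this per-slave selection is exactly what Matan objects to: two slaves could end up in different modes and deliver inconsistent traffic, which is why the thread also discusses a single bond-wide `lacp_mc_grp_conf` setting instead.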
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-09-12 5:56 ` Matan Azrad @ 2018-09-13 15:14 ` Chas Williams 2018-09-13 15:40 ` Matan Azrad 0 siblings, 1 reply; 26+ messages in thread From: Chas Williams @ 2018-09-13 15:14 UTC (permalink / raw) To: Matan Azrad; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams On Wed, Sep 12, 2018 at 1:56 AM Matan Azrad <matan@mellanox.com> wrote: > > Hi Chas > > From: Chas Williams > > On Mon, Aug 6, 2018 at 3:35 PM Matan Azrad <matan@mellanox.com> > > wrote: > > > > > > > > > Hi Chas > > > > > > From: Chas Williams > > > >On Mon, Aug 6, 2018 at 1:46 PM Matan Azrad > > <mailto:matan@mellanox.com> wrote: > > > >Hi Chas > > > > > > > >From: Chas Williams > > > >>On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad > > <mailto:mailto:matan@mellanox.com> wrote: > > > >>Hi Chas > > > >> > > > >> From: Chas Williams [mailto:mailto:mailto:mailto:3chas3@gmail.com] > > > >> On Thu, Aug 2, 2018 at 1:33 > > > >>> PM Matan Azrad <mailto:mailto:matan@mellanox.com> wrote: > > > >>> > > > > >>> > > I suggest to do it like next, > > > >>> > > To add one more parameter for LACP which means how to > > > >>> > > configure the > > > >>> > LACP MC group - lacp_mc_grp_conf: > > > >>> > > 1. rte_flow. > > > >>> > > 2. flow director. > > > >>> > > 3. add_mac. > > > >>> > > 3. set_mc_add_list > > > >>> > > 4. allmulti > > > >>> > > 5. promiscuous > > > >>> > > Maybe more... or less :) > > > >>> > > > > > >>> > > By this way the user decides how to do it, if it's fail for a > > > >>> > > slave, the salve > > > >>> > should be rejected. > > > >>> > > Conflict with another configuration(for example calling to > > > >>> > > promiscuous > > > >>> > disable while running LACP lacp_mc_grp_conf=5) should raise an > > error. > > > >>> > > > > > >>> > > What do you think? 
> > > >>> > > > > > >>> > > > > >>> > Supporting an LACP mc group specific configuration does make > > > >>> > sense, but I wonder if this could just be handled by default during > > slave add. > > > >>> > > > > >>> > > > > >>> > 1 and 2 are essentially the same hardware filtering offload > > > >>> > mode, and the other modes are irrelevant if this is enabled, it > > > >>> > should not be possible to add the slave if the bond is > > > >>> > configured for this mode, or possible to change the bond into > > > >>> > this mode if an existing slave doesn't support it. > > > >>> > > > >>> > > > > >>> > 3 should be the default expected behavior, but > > > >>> > rte_eth_bond_slave_add() should fail if the slave being added > > > >>> > doesn't support either adding the MAC to the slave or adding the > > LACP MC address. > > > >>> > > > > >>> > Then the user could try either rte_eth_allmulticast_enable() on > > > >>> > the bond port and then try to add the slave again, which should > > > >>> > fail if existing slave didn't support allmulticast or the add > > > >>> > slave would fail again if the slave didn't support allmulticast > > > >>> > and finally just call > > > >>> > rte_eth_promiscuous_enable() on the bond and then try to re-add > > > >>> > the that slave. > > > >>> > > > > >>> > but maybe having a explicit configuration parameter would be > > better. > > > >>> > > > >>> I don't sure you understand exactly what I’m suggesting here, again: > > > >>> I suggest to add a new parameter to the LACP mode called > > > >>> lacp_mc_grp_conf(or something else). > > > >>> So, when the user configures LACP (mode 4) it must to configure > > > >>> the lacp_mc_grp_conf parameter to one of the options I suggested. > > > >>> This parameter is not per slave means the bond PMD will use the > > > >>> selected option to configure the LACP MC group for all the slave ports. > > > >>> > > > >>> If one of the slaves doesn't support the selected option it should be > > rejected. 
> > > >>> Conflicts should rais an error. > > > >>> > > > >>> I agree here. Yes, if a slave can't manage to subscribe to the > > > >>> multicast group, an error should be raised. The only way for this > > > >>> to happen is that you don't have promisc support which is the ultimate > > fallback. > > > >> > > > >>> The advantages are: > > > >>> The user knows which option is better to synchronize with his > > application. > > > >>> The user knows better than the bond PMD what is the slaves > > capabilities. > > > >>> All the slaves are configured by the same way - consistent traffic. > > > >>> > > > >>> > > > >>> It would be ideal if all the slaves would have the same features > > > >>> and capabilities. There wasn't enforced before, so this would be > > > >>> a new restriction that would be less flexible than what we > > > >>> currently have. That doesn't seem like an improvement. > > > >> > > > >>> The bonding user probably doesn't care which mode is used. > > > >>> The bonding user just wants bonding to work. He doesn't care about > > the details. If I am writing > > > >>> an application with this proposed API, I need to make a list of > > > >>> adapters and what they support (and keep this up to date as DPDK > > evolves). Ugh. > > > >> > > > >>The applications commonly know what are the nics capabilities they > > work with. > > > >> > > > >>I know at least an one big application which really suffering > > > >>because the bond configures promiscuous in mode 4 without the > > application asking (it's considered there as a bug in dpdk). > > > >>I think that providing another option will be better. > > > >> > > > >>I think providing another option will be better as well. However we > > disagree on the option. > > > >>If the PMD has no other way to subscribe the multicast group, it has to > > use promiscuous mode. > > > > > > > >>Yes, it is true but there are a lot of other and better options, > > promiscuous is greedy! Should be the last alternative to use. 
> > > > > > > >Unfortunately, it's the only option implemented. > > > > > > Yes, I know, I suggest to change it or at least not to make it worst. > > > > > > >>Providing a list of options only makes life complicated for the > > > >>developer and doesn't really make any difference in the end results. > > > > > > > >>A big different, for example: > > > >>Let's say the bonding groups 2 devices that support rte_flow. > > > >>The user don't want neither promiscuous nor all multicast, he just > > > >>want to get it's mac traffic + LACP MC group traffic,(a realistic use case) if > > he has an option to tell to the bond PMD, please use rte_flow to configure > > the specific LACP MC group it will be great. > > > >>Think how much work these applications should do in the current > > behavior. > > > > > > > >The bond PMD should already know how to do that itself. > > > > > > The bond can do it with a lot of complexity, but again the user must know > > what the bond chose to be synchronized. > > > So, I think it's better that the user will define it because it is a > > > traffic configuration (the same as promiscuous configuration - the > > > user configures it) > > > > Again, you are forcing more work on the user to ask them to select > > between the methods. > > > > > > We can create a default option as now(promiscuous). > > > > > > >> For instance, if the least common denominator between the two PMDs > > > >>is promiscuous mode, you are going to be forced to run both in > > > >>promiscuous mode instead of selecting the best mode for each PMD. > > > > > > > >>In this case promiscuous is better, > > > >>Using a different configuration is worst and against the bonding PMD > > principle to get a consistent traffic from the slaves. > > > >>So, if one uses allmulti and one uses promiscuous the application > > > >>may get an inconsistent traffic and it may trigger a lot of problems and > > complications for some applications. 
> > > > > > >Those applications should already have those problems. > > > > I can make the counter > > > >argument that there are potentially applications relying on the broken > > behavior. > > > > > > You right. So adding allmulticast will require changes in these applications. > > > > > > >We need to ignore those issues and fix this the "right" way. The > > > >"right" way IMHO is the pass the least amount of traffic possible in each > > case. > > > > > > Not in cost of an inconsistency, but looks like we are not agree here. > > > > > > > I have recently run into this issue again with a device that doesn't support > > promiscuous, but does let me subscribe to the appropriate multicast groups. > > At this point, I am leaning toward adding another API call to the bonding API > > so that the user can provide a callback to setup whatever they want on the > > slaves. > > The default setup routine would be enable promiscuous. > > > > Comments? > > The bonding already allows to the users to do operations directly to the slaves(it exports the port ids - rte_eth_bond_slaves_get), so I don't understand why do you need a new API. > The only change you need may be to add parameter to disable the promiscuous configuration in mode4. Changing the API is a new API. We should attempt not to break any of the existing API. As for being able to operate on the slaves, yes, you can. But the bonding PMD also controls the slaves as well. It seems cleaner to make this explicit: the bonding driver calls out to the application to set up the 802.3ad listening when it needs to be done. If you want to control it a different way, you simply provide a null routine that does nothing and control it however you like. ^ permalink raw reply [flat|nested] 26+ messages in thread
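The callback proposal above might look roughly like the following. Everything here (the callback type, registration function, and the in-memory promisc table standing in for real ports) is a hypothetical sketch of the idea, not an existing bonding API:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical callback type: invoked once per slave when the bonding
 * driver needs 802.3ad listening set up on that slave. */
typedef void (*bond_8023ad_setup_cb)(uint16_t slave_port_id);

/* Stand-in for per-port promiscuous state, instead of real devices. */
int promisc_state[8];

/* Default routine: enable promiscuous -- the current mode-4 behaviour. */
void default_8023ad_setup(uint16_t slave_port_id)
{
    promisc_state[slave_port_id] = 1;
}

/* Null routine: do nothing; the application manages the slave itself
 * (e.g. by programming the LACP MC group with rte_flow). */
void null_8023ad_setup(uint16_t slave_port_id)
{
    (void)slave_port_id;
}

static bond_8023ad_setup_cb setup_cb = default_8023ad_setup;

/* Registration: passing NULL restores the default behaviour, so the
 * existing API contract is preserved for applications that never call
 * this. */
void bond_register_8023ad_setup(bond_8023ad_setup_cb cb)
{
    setup_cb = (cb != NULL) ? cb : default_8023ad_setup;
}

/* Called by the (sketched) bonding driver when a slave is activated. */
void bond_activate_slave(uint16_t slave_port_id)
{
    setup_cb(slave_port_id);
}
```

Applications that never register a callback keep today's promiscuous behaviour; applications that want full control register the null routine and configure the slaves themselves.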
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-09-13 15:14 ` Chas Williams @ 2018-09-13 15:40 ` Matan Azrad 2018-09-16 16:14 ` Chas Williams 0 siblings, 1 reply; 26+ messages in thread From: Matan Azrad @ 2018-09-13 15:40 UTC (permalink / raw) To: Chas Williams; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams Hi Chas From: Chas Williams > On Wed, Sep 12, 2018 at 1:56 AM Matan Azrad <matan@mellanox.com> > wrote: > > > > Hi Chas > > > > From: Chas Williams > > > On Mon, Aug 6, 2018 at 3:35 PM Matan Azrad <matan@mellanox.com> > > > wrote: > > > > > > > > > > > > Hi Chas > > > > > > > > From: Chas Williams > > > > >On Mon, Aug 6, 2018 at 1:46 PM Matan Azrad > > > <mailto:matan@mellanox.com> wrote: > > > > >Hi Chas > > > > > > > > > >From: Chas Williams > > > > >>On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad > > > <mailto:mailto:matan@mellanox.com> wrote: > > > > >>Hi Chas > > > > >> > > > > >> From: Chas Williams > > > > >> [mailto:mailto:mailto:mailto:3chas3@gmail.com] > > > > >> On Thu, Aug 2, 2018 at 1:33 > > > > >>> PM Matan Azrad <mailto:mailto:matan@mellanox.com> wrote: > > > > >>> > > > > > >>> > > I suggest to do it like next, To add one more parameter > > > > >>> > > for LACP which means how to configure the > > > > >>> > LACP MC group - lacp_mc_grp_conf: > > > > >>> > > 1. rte_flow. > > > > >>> > > 2. flow director. > > > > >>> > > 3. add_mac. > > > > >>> > > 3. set_mc_add_list > > > > >>> > > 4. allmulti > > > > >>> > > 5. promiscuous > > > > >>> > > Maybe more... or less :) > > > > >>> > > > > > > >>> > > By this way the user decides how to do it, if it's fail > > > > >>> > > for a slave, the salve > > > > >>> > should be rejected. > > > > >>> > > Conflict with another configuration(for example calling to > > > > >>> > > promiscuous > > > > >>> > disable while running LACP lacp_mc_grp_conf=5) should raise > > > > >>> > an > > > error. > > > > >>> > > > > > > >>> > > What do you think? 
> > > > >>> > > > > > > >>> > > > > > >>> > Supporting an LACP mc group specific configuration does make > > > > >>> > sense, but I wonder if this could just be handled by default > > > > >>> > during > > > slave add. > > > > >>> > > > > > >>> > > > > > >>> > 1 and 2 are essentially the same hardware filtering offload > > > > >>> > mode, and the other modes are irrelevant if this is enabled, > > > > >>> > it should not be possible to add the slave if the bond is > > > > >>> > configured for this mode, or possible to change the bond > > > > >>> > into this mode if an existing slave doesn't support it. > > > > >>> > > > > >>> > > > > > >>> > 3 should be the default expected behavior, but > > > > >>> > rte_eth_bond_slave_add() should fail if the slave being > > > > >>> > added doesn't support either adding the MAC to the slave or > > > > >>> > adding the > > > LACP MC address. > > > > >>> > > > > > >>> > Then the user could try either rte_eth_allmulticast_enable() > > > > >>> > on the bond port and then try to add the slave again, which > > > > >>> > should fail if existing slave didn't support allmulticast or > > > > >>> > the add slave would fail again if the slave didn't support > > > > >>> > allmulticast and finally just call > > > > >>> > rte_eth_promiscuous_enable() on the bond and then try to > > > > >>> > re-add the that slave. > > > > >>> > > > > > >>> > but maybe having a explicit configuration parameter would be > > > better. > > > > >>> > > > > >>> I don't sure you understand exactly what I’m suggesting here, > again: > > > > >>> I suggest to add a new parameter to the LACP mode called > > > > >>> lacp_mc_grp_conf(or something else). > > > > >>> So, when the user configures LACP (mode 4) it must to > > > > >>> configure the lacp_mc_grp_conf parameter to one of the options I > suggested. > > > > >>> This parameter is not per slave means the bond PMD will use > > > > >>> the selected option to configure the LACP MC group for all the > slave ports. 
> > > > >>> > > > > >>> If one of the slaves doesn't support the selected option it > > > > >>> should be > > > rejected. > > > > >>> Conflicts should rais an error. > > > > >>> > > > > >>> I agree here. Yes, if a slave can't manage to subscribe to > > > > >>> the multicast group, an error should be raised. The only way > > > > >>> for this to happen is that you don't have promisc support > > > > >>> which is the ultimate > > > fallback. > > > > >> > > > > >>> The advantages are: > > > > >>> The user knows which option is better to synchronize with his > > > application. > > > > >>> The user knows better than the bond PMD what is the slaves > > > capabilities. > > > > >>> All the slaves are configured by the same way - consistent traffic. > > > > >>> > > > > >>> > > > > >>> It would be ideal if all the slaves would have the same > > > > >>> features and capabilities. There wasn't enforced before, so > > > > >>> this would be a new restriction that would be less flexible > > > > >>> than what we currently have. That doesn't seem like an > improvement. > > > > >> > > > > >>> The bonding user probably doesn't care which mode is used. > > > > >>> The bonding user just wants bonding to work. He doesn't care > > > > >>> about > > > the details. If I am writing > > > > >>> an application with this proposed API, I need to make a list > > > > >>> of adapters and what they support (and keep this up to date as > > > > >>> DPDK > > > evolves). Ugh. > > > > >> > > > > >>The applications commonly know what are the nics capabilities > > > > >>they > > > work with. > > > > >> > > > > >>I know at least an one big application which really suffering > > > > >>because the bond configures promiscuous in mode 4 without the > > > application asking (it's considered there as a bug in dpdk). > > > > >>I think that providing another option will be better. > > > > >> > > > > >>I think providing another option will be better as well. > > > > >>However we > > > disagree on the option. 
> > > > >>If the PMD has no other way to subscribe the multicast group, it > > > > >>has to > > > use promiscuous mode. > > > > > > > > > >>Yes, it is true but there are a lot of other and better options, > > > promiscuous is greedy! Should be the last alternative to use. > > > > > > > > > >Unfortunately, it's the only option implemented. > > > > > > > > Yes, I know, I suggest to change it or at least not to make it worst. > > > > > > > > >>Providing a list of options only makes life complicated for the > > > > >>developer and doesn't really make any difference in the end results. > > > > > > > > > >>A big different, for example: > > > > >>Let's say the bonding groups 2 devices that support rte_flow. > > > > >>The user don't want neither promiscuous nor all multicast, he > > > > >>just want to get it's mac traffic + LACP MC group traffic,(a > > > > >>realistic use case) if > > > he has an option to tell to the bond PMD, please use rte_flow to > > > configure the specific LACP MC group it will be great. > > > > >>Think how much work these applications should do in the current > > > behavior. > > > > > > > > > >The bond PMD should already know how to do that itself. > > > > > > > > The bond can do it with a lot of complexity, but again the user > > > > must know > > > what the bond chose to be synchronized. > > > > So, I think it's better that the user will define it because it is > > > > a traffic configuration (the same as promiscuous configuration - > > > > the user configures it) > > > > > Again, you are forcing more work on the user to ask them to > > > > > select > > > between the methods. > > > > > > > > We can create a default option as now(promiscuous). > > > > > > > > >> For instance, if the least common denominator between the two > > > > >>PMDs is promiscuous mode, you are going to be forced to run > > > > >>both in promiscuous mode instead of selecting the best mode for > each PMD. 
> > > > > > > > > >>In this case promiscuous is better, Using a different > > > > >>configuration is worst and against the bonding PMD > > > principle to get a consistent traffic from the slaves. > > > > >>So, if one uses allmulti and one uses promiscuous the > > > > >>application may get an inconsistent traffic and it may trigger a > > > > >>lot of problems and > > > complications for some applications. > > > > > > > > > >Those applications should already have those problems. > > > > > I can make the counter > > > > >argument that there are potentially applications relying on the > > > > >broken > > > behavior. > > > > > > > > You right. So adding allmulticast will require changes in these > applications. > > > > > > > > >We need to ignore those issues and fix this the "right" way. The > > > > >"right" way IMHO is the pass the least amount of traffic possible > > > > >in each > > > case. > > > > > > > > Not in cost of an inconsistency, but looks like we are not agree here. > > > > > > > > > > I have recently run into this issue again with a device that doesn't > > > support promiscuous, but does let me subscribe to the appropriate > multicast groups. > > > At this point, I am leaning toward adding another API call to the > > > bonding API so that the user can provide a callback to setup > > > whatever they want on the slaves. > > > The default setup routine would be enable promiscuous. > > > > > > Comments? > > > > The bonding already allows to the users to do operations directly to the > slaves(it exports the port ids - rte_eth_bond_slaves_get), so I don't > understand why do you need a new API. > > The only change you need may be to add parameter to disable the > promiscuous configuration in mode4. > > Changing the API is new API. We should attempt to not break any of the > existing API. > > As for being able to operate on the slaves, yet, you can. But the bonding > PMD also controls the slaves as well. 
It seems cleaner to make this explicit, > that the bonding driver calls out to the application to set up the 802.3ad > listening when it needs to be done. > If you want to control it a different way, you simply provide a null routine > that does nothing and control it however you like. The issue is that the bonding PMD cannot stay synchronized with such a callback; it doesn't know what was done by the application. I don't think we should open direct application calls to the slaves through an API; if the application really needs it, as we said, it already has the option to do so without a new API, although that is really not recommended by the bonding guide. ^ permalink raw reply [flat|nested] 26+ messages in thread
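The existing escape hatch Matan refers to, driving the slaves directly through the exported port ids, follows the pattern below. The two stub_* functions are self-contained stand-ins for rte_eth_bond_slaves_get() and rte_eth_promiscuous_disable() so the sketch compiles without DPDK; a real application would call the DPDK functions (and pass the bond's own port id to the slaves-get call).

```c
#include <stdint.h>

#define MAX_PORTS 8

/* Toy per-port promiscuous flags; ports 2 and 5 act as bond slaves. */
int promisc_on[MAX_PORTS] = { 1, 1, 1, 1, 1, 1, 1, 1 };

/* Stand-in for rte_eth_bond_slaves_get(): fills 'slaves' with the
 * slave port ids and returns the slave count. */
int stub_bond_slaves_get(uint16_t slaves[], int len)
{
    const uint16_t ids[] = { 2, 5 };
    int n = (int)(sizeof(ids) / sizeof(ids[0]));
    for (int i = 0; i < n && i < len; i++)
        slaves[i] = ids[i];
    return n;
}

/* Stand-in for rte_eth_promiscuous_disable(). */
void stub_promiscuous_disable(uint16_t port_id)
{
    promisc_on[port_id] = 0;
}

/* The application enumerates the bond's slaves and turns promiscuous
 * off on each one directly -- no new bonding API required. The
 * drawback, as discussed above, is that the bonding PMD has no idea
 * this happened. */
void app_disable_slave_promisc(void)
{
    uint16_t slaves[MAX_PORTS];
    int n = stub_bond_slaves_get(slaves, MAX_PORTS);
    for (int i = 0; i < n; i++)
        stub_promiscuous_disable(slaves[i]);
}
```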
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-09-13 15:40 ` Matan Azrad @ 2018-09-16 16:14 ` Chas Williams 2018-09-17 6:29 ` Matan Azrad 0 siblings, 1 reply; 26+ messages in thread From: Chas Williams @ 2018-09-16 16:14 UTC (permalink / raw) To: Matan Azrad; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams On Thu, Sep 13, 2018 at 11:40 AM Matan Azrad <matan@mellanox.com> wrote: > > Hi Chas > > From: Chas Williams > > On Wed, Sep 12, 2018 at 1:56 AM Matan Azrad <matan@mellanox.com> > > wrote: > > > > > > Hi Chas > > > > > > From: Chas Williams > > > > On Mon, Aug 6, 2018 at 3:35 PM Matan Azrad <matan@mellanox.com> > > > > wrote: > > > > > > > > > > > > > > > Hi Chas > > > > > > > > > > From: Chas Williams > > > > > >On Mon, Aug 6, 2018 at 1:46 PM Matan Azrad > > > > <mailto:matan@mellanox.com> wrote: > > > > > >Hi Chas > > > > > > > > > > > >From: Chas Williams > > > > > >>On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad > > > > <mailto:mailto:matan@mellanox.com> wrote: > > > > > >>Hi Chas > > > > > >> > > > > > >> From: Chas Williams > > > > > >> [mailto:mailto:mailto:mailto:3chas3@gmail.com] > > > > > >> On Thu, Aug 2, 2018 at 1:33 > > > > > >>> PM Matan Azrad <mailto:mailto:matan@mellanox.com> wrote: > > > > > >>> > > > > > > >>> > > I suggest to do it like next, To add one more parameter > > > > > >>> > > for LACP which means how to configure the > > > > > >>> > LACP MC group - lacp_mc_grp_conf: > > > > > >>> > > 1. rte_flow. > > > > > >>> > > 2. flow director. > > > > > >>> > > 3. add_mac. > > > > > >>> > > 3. set_mc_add_list > > > > > >>> > > 4. allmulti > > > > > >>> > > 5. promiscuous > > > > > >>> > > Maybe more... or less :) > > > > > >>> > > > > > > > >>> > > By this way the user decides how to do it, if it's fail > > > > > >>> > > for a slave, the salve > > > > > >>> > should be rejected. 
> > > > > >>> > > Conflict with another configuration(for example calling to > > > > > >>> > > promiscuous > > > > > >>> > disable while running LACP lacp_mc_grp_conf=5) should raise > > > > > >>> > an > > > > error. > > > > > >>> > > > > > > > >>> > > What do you think? > > > > > >>> > > > > > > > >>> > > > > > > >>> > Supporting an LACP mc group specific configuration does make > > > > > >>> > sense, but I wonder if this could just be handled by default > > > > > >>> > during > > > > slave add. > > > > > >>> > > > > > > >>> > > > > > > >>> > 1 and 2 are essentially the same hardware filtering offload > > > > > >>> > mode, and the other modes are irrelevant if this is enabled, > > > > > >>> > it should not be possible to add the slave if the bond is > > > > > >>> > configured for this mode, or possible to change the bond > > > > > >>> > into this mode if an existing slave doesn't support it. > > > > > >>> > > > > > >>> > > > > > > >>> > 3 should be the default expected behavior, but > > > > > >>> > rte_eth_bond_slave_add() should fail if the slave being > > > > > >>> > added doesn't support either adding the MAC to the slave or > > > > > >>> > adding the > > > > LACP MC address. > > > > > >>> > > > > > > >>> > Then the user could try either rte_eth_allmulticast_enable() > > > > > >>> > on the bond port and then try to add the slave again, which > > > > > >>> > should fail if existing slave didn't support allmulticast or > > > > > >>> > the add slave would fail again if the slave didn't support > > > > > >>> > allmulticast and finally just call > > > > > >>> > rte_eth_promiscuous_enable() on the bond and then try to > > > > > >>> > re-add the that slave. > > > > > >>> > > > > > > >>> > but maybe having a explicit configuration parameter would be > > > > better. 
> > > > > >>> > > > > > >>> I don't sure you understand exactly what I’m suggesting here, > > again: > > > > > >>> I suggest to add a new parameter to the LACP mode called > > > > > >>> lacp_mc_grp_conf(or something else). > > > > > >>> So, when the user configures LACP (mode 4) it must to > > > > > >>> configure the lacp_mc_grp_conf parameter to one of the options I > > suggested. > > > > > >>> This parameter is not per slave means the bond PMD will use > > > > > >>> the selected option to configure the LACP MC group for all the > > slave ports. > > > > > >>> > > > > > >>> If one of the slaves doesn't support the selected option it > > > > > >>> should be > > > > rejected. > > > > > >>> Conflicts should rais an error. > > > > > >>> > > > > > >>> I agree here. Yes, if a slave can't manage to subscribe to > > > > > >>> the multicast group, an error should be raised. The only way > > > > > >>> for this to happen is that you don't have promisc support > > > > > >>> which is the ultimate > > > > fallback. > > > > > >> > > > > > >>> The advantages are: > > > > > >>> The user knows which option is better to synchronize with his > > > > application. > > > > > >>> The user knows better than the bond PMD what is the slaves > > > > capabilities. > > > > > >>> All the slaves are configured by the same way - consistent traffic. > > > > > >>> > > > > > >>> > > > > > >>> It would be ideal if all the slaves would have the same > > > > > >>> features and capabilities. There wasn't enforced before, so > > > > > >>> this would be a new restriction that would be less flexible > > > > > >>> than what we currently have. That doesn't seem like an > > improvement. > > > > > >> > > > > > >>> The bonding user probably doesn't care which mode is used. > > > > > >>> The bonding user just wants bonding to work. He doesn't care > > > > > >>> about > > > > the details. 
If I am writing > > > > > >>> an application with this proposed API, I need to make a list > > > > > >>> of adapters and what they support (and keep this up to date as > > > > > >>> DPDK > > > > evolves). Ugh. > > > > > >> > > > > > >>The applications commonly know what are the nics capabilities > > > > > >>they > > > > work with. > > > > > >> > > > > > >>I know at least an one big application which really suffering > > > > > >>because the bond configures promiscuous in mode 4 without the > > > > application asking (it's considered there as a bug in dpdk). > > > > > >>I think that providing another option will be better. > > > > > >> > > > > > >>I think providing another option will be better as well. > > > > > >>However we > > > > disagree on the option. > > > > > >>If the PMD has no other way to subscribe the multicast group, it > > > > > >>has to > > > > use promiscuous mode. > > > > > > > > > > > >>Yes, it is true but there are a lot of other and better options, > > > > promiscuous is greedy! Should be the last alternative to use. > > > > > > > > > > > >Unfortunately, it's the only option implemented. > > > > > > > > > > Yes, I know, I suggest to change it or at least not to make it worst. > > > > > > > > > > >>Providing a list of options only makes life complicated for the > > > > > >>developer and doesn't really make any difference in the end results. > > > > > > > > > > > >>A big different, for example: > > > > > >>Let's say the bonding groups 2 devices that support rte_flow. > > > > > >>The user don't want neither promiscuous nor all multicast, he > > > > > >>just want to get it's mac traffic + LACP MC group traffic,(a > > > > > >>realistic use case) if > > > > he has an option to tell to the bond PMD, please use rte_flow to > > > > configure the specific LACP MC group it will be great. > > > > > >>Think how much work these applications should do in the current > > > > behavior. > > > > > > > > > > > >The bond PMD should already know how to do that itself. 
> > > > > > > > > > The bond can do it with a lot of complexity, but again the user > > > > > must know > > > > what the bond chose to be synchronized. > > > > > So, I think it's better that the user will define it because it is > > > > > a traffic configuration (the same as promiscuous configuration - > > > > > the user configures it) > > > > > > Again, you are forcing more work on the user to ask them to > > > > > > select > > > > between the methods. > > > > > > > > > > We can create a default option as now(promiscuous). > > > > > > > > > > >> For instance, if the least common denominator between the two > > > > > >>PMDs is promiscuous mode, you are going to be forced to run > > > > > >>both in promiscuous mode instead of selecting the best mode for > > each PMD. > > > > > > > > > > > >>In this case promiscuous is better, Using a different > > > > > >>configuration is worst and against the bonding PMD > > > > principle to get a consistent traffic from the slaves. > > > > > >>So, if one uses allmulti and one uses promiscuous the > > > > > >>application may get an inconsistent traffic and it may trigger a > > > > > >>lot of problems and > > > > complications for some applications. > > > > > > > > > > > >Those applications should already have those problems. > > > > > > I can make the counter > > > > > >argument that there are potentially applications relying on the > > > > > >broken > > > > behavior. > > > > > > > > > > You right. So adding allmulticast will require changes in these > > applications. > > > > > > > > > > >We need to ignore those issues and fix this the "right" way. The > > > > > >"right" way IMHO is the pass the least amount of traffic possible > > > > > >in each > > > > case. > > > > > > > > > > Not in cost of an inconsistency, but looks like we are not agree here. 
> > > > > > > > > > > > > I have recently run into this issue again with a device that doesn't > > > > support promiscuous, but does let me subscribe to the appropriate > > multicast groups. > > > > At this point, I am leaning toward adding another API call to the > > > > bonding API so that the user can provide a callback to setup > > > > whatever they want on the slaves. > > > > The default setup routine would be enable promiscuous. > > > > > > > > Comments? > > > > > > The bonding already allows to the users to do operations directly to the > > slaves(it exports the port ids - rte_eth_bond_slaves_get), so I don't > > understand why do you need a new API. > > > The only change you need may be to add parameter to disable the > > promiscuous configuration in mode4. > > > > Changing the API is new API. We should attempt to not break any of the > > existing API. > > > > As for being able to operate on the slaves, yet, you can. But the bonding > > PMD also controls the slaves as well. It seems cleaner to make this explcit, > > that the bonding driver calls out to the application to setup the 802.3ad > > listening when it needs to be done. > > If you want to control it a different way, you simply provide a null routine > > that does nothing and control it however you like. > > The issue is that the bonding PMD cannot be synchronized with such like callback, it doesn't know what was done by the application. Exactly. That's why you need to be able to change the native behavior of the bonding PMD. > I don't think we should open a direct application calls to the slaves by an API, if the application really need it, as we said, it has already option to do it without a new API while it is really not recommended by the bonding guide. This would be the opposite direction. The slaves would be calling out to the application to ask the application to do something. With the existing bonding PMD it will always attempt to enable promisc, and that isn't desirable in all situations. 
Previously, activate_slave() always happened as part of slave add. That shouldn't be the case now if the bonding PMD isn't running. Adding and removing slaves while the bonding PMD is started should probably not be done, because I am not sure we can ensure that there is race-free behavior with the 802.3ad rx/tx routines. ^ permalink raw reply [flat|nested] 26+ messages in thread
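The callback approach floated above can be sketched as follows. Every name here (`slave_setup_cb_t`, `bond_8023ad_hooks`, `activate_slave_hook`) is hypothetical — no such hook exists in the DPDK bonding API — and the stub `default_setup` merely counts invocations where the real default would call rte_eth_promiscuous_enable():

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical shape of the proposed hook: the bond calls out to the
 * application whenever a slave needs its 802.3ad listening configured.
 * The application can override it; the default enables promiscuous. */
typedef void (*slave_setup_cb_t)(unsigned int slave_port_id, void *arg);

struct bond_8023ad_hooks {
    slave_setup_cb_t setup;
    void *arg;
};

/* Default routine; the counter stands in for the side effect of
 * rte_eth_promiscuous_enable() on the slave port. */
static void default_setup(unsigned int slave_port_id, void *arg)
{
    (void)slave_port_id;
    int *promisc_calls = arg;
    (*promisc_calls)++;
}

/* Model of the activation path: invoke whatever routine is registered,
 * so an application that manages multicast itself can install a no-op. */
static void activate_slave_hook(const struct bond_8023ad_hooks *h,
                                unsigned int port)
{
    if (h->setup != NULL)
        h->setup(port, h->arg);
}
```

As the thread notes, the drawback of this shape is that the bonding PMD cannot tell what the application's callback actually did to the slave.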
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-09-16 16:14 ` Chas Williams @ 2018-09-17 6:29 ` Matan Azrad 0 siblings, 0 replies; 26+ messages in thread From: Matan Azrad @ 2018-09-17 6:29 UTC (permalink / raw) To: Chas Williams; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams Hi Chas From: Chas Williams > On Thu, Sep 13, 2018 at 11:40 AM Matan Azrad <matan@mellanox.com> > wrote: > > > > Hi Chas > > > > From: Chas Williams > > > On Wed, Sep 12, 2018 at 1:56 AM Matan Azrad <matan@mellanox.com> > > > wrote: > > > > > > > > Hi Chas > > > > > > > > From: Chas Williams > > > > > On Mon, Aug 6, 2018 at 3:35 PM Matan Azrad > <matan@mellanox.com> > > > > > wrote: > > > > > > > > > > > > > > > > > > Hi Chas > > > > > > > > > > > > From: Chas Williams > > > > > > >On Mon, Aug 6, 2018 at 1:46 PM Matan Azrad > > > > > <mailto:matan@mellanox.com> wrote: > > > > > > >Hi Chas > > > > > > > > > > > > > >From: Chas Williams > > > > > > >>On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad > > > > > <mailto:mailto:matan@mellanox.com> wrote: > > > > > > >>Hi Chas > > > > > > >> > > > > > > >> From: Chas Williams > > > > > > >> [mailto:mailto:mailto:mailto:3chas3@gmail.com] > > > > > > >> On Thu, Aug 2, 2018 at 1:33 > > > > > > >>> PM Matan Azrad <mailto:mailto:matan@mellanox.com> wrote: > > > > > > >>> > > > > > > > >>> > > I suggest to do it like next, To add one more > > > > > > >>> > > parameter for LACP which means how to configure the > > > > > > >>> > LACP MC group - lacp_mc_grp_conf: > > > > > > >>> > > 1. rte_flow. > > > > > > >>> > > 2. flow director. > > > > > > >>> > > 3. add_mac. > > > > > > >>> > > 3. set_mc_add_list > > > > > > >>> > > 4. allmulti > > > > > > >>> > > 5. promiscuous > > > > > > >>> > > Maybe more... or less :) > > > > > > >>> > > > > > > > > >>> > > By this way the user decides how to do it, if it's > > > > > > >>> > > fail for a slave, the salve > > > > > > >>> > should be rejected. 
> > > > > > >>> > > Conflict with another configuration(for example > > > > > > >>> > > calling to promiscuous > > > > > > >>> > disable while running LACP lacp_mc_grp_conf=5) should > > > > > > >>> > raise an > > > > > error. > > > > > > >>> > > > > > > > > >>> > > What do you think? > > > > > > >>> > > > > > > > > >>> > > > > > > > >>> > Supporting an LACP mc group specific configuration does > > > > > > >>> > make sense, but I wonder if this could just be handled > > > > > > >>> > by default during > > > > > slave add. > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > 1 and 2 are essentially the same hardware filtering > > > > > > >>> > offload mode, and the other modes are irrelevant if this > > > > > > >>> > is enabled, it should not be possible to add the slave > > > > > > >>> > if the bond is configured for this mode, or possible to > > > > > > >>> > change the bond into this mode if an existing slave doesn't > support it. > > > > > > >>> > > > > > > >>> > > > > > > > >>> > 3 should be the default expected behavior, but > > > > > > >>> > rte_eth_bond_slave_add() should fail if the slave being > > > > > > >>> > added doesn't support either adding the MAC to the slave > > > > > > >>> > or adding the > > > > > LACP MC address. > > > > > > >>> > > > > > > > >>> > Then the user could try either > > > > > > >>> > rte_eth_allmulticast_enable() on the bond port and then > > > > > > >>> > try to add the slave again, which should fail if > > > > > > >>> > existing slave didn't support allmulticast or the add > > > > > > >>> > slave would fail again if the slave didn't support > > > > > > >>> > allmulticast and finally just call > > > > > > >>> > rte_eth_promiscuous_enable() on the bond and then try to > > > > > > >>> > re-add the that slave. > > > > > > >>> > > > > > > > >>> > but maybe having a explicit configuration parameter > > > > > > >>> > would be > > > > > better. 
> > > > > > >>> > > > > > > >>> I don't sure you understand exactly what I’m suggesting > > > > > > >>> here, > > > again: > > > > > > >>> I suggest to add a new parameter to the LACP mode called > > > > > > >>> lacp_mc_grp_conf(or something else). > > > > > > >>> So, when the user configures LACP (mode 4) it must to > > > > > > >>> configure the lacp_mc_grp_conf parameter to one of the > > > > > > >>> options I > > > suggested. > > > > > > >>> This parameter is not per slave means the bond PMD will > > > > > > >>> use the selected option to configure the LACP MC group for > > > > > > >>> all the > > > slave ports. > > > > > > >>> > > > > > > >>> If one of the slaves doesn't support the selected option > > > > > > >>> it should be > > > > > rejected. > > > > > > >>> Conflicts should rais an error. > > > > > > >>> > > > > > > >>> I agree here. Yes, if a slave can't manage to subscribe > > > > > > >>> to the multicast group, an error should be raised. The > > > > > > >>> only way for this to happen is that you don't have promisc > > > > > > >>> support which is the ultimate > > > > > fallback. > > > > > > >> > > > > > > >>> The advantages are: > > > > > > >>> The user knows which option is better to synchronize with > > > > > > >>> his > > > > > application. > > > > > > >>> The user knows better than the bond PMD what is the slaves > > > > > capabilities. > > > > > > >>> All the slaves are configured by the same way - consistent > traffic. > > > > > > >>> > > > > > > >>> > > > > > > >>> It would be ideal if all the slaves would have the same > > > > > > >>> features and capabilities. There wasn't enforced before, > > > > > > >>> so this would be a new restriction that would be less > > > > > > >>> flexible than what we currently have. That doesn't seem > > > > > > >>> like an > > > improvement. > > > > > > >> > > > > > > >>> The bonding user probably doesn't care which mode is used. > > > > > > >>> The bonding user just wants bonding to work. 
He doesn't > > > > > > >>> care about > > > > > the details. If I am writing > > > > > > >>> an application with this proposed API, I need to make a > > > > > > >>> list of adapters and what they support (and keep this up > > > > > > >>> to date as DPDK > > > > > evolves). Ugh. > > > > > > >> > > > > > > >>The applications commonly know what are the nics > > > > > > >>capabilities they > > > > > work with. > > > > > > >> > > > > > > >>I know at least an one big application which really > > > > > > >>suffering because the bond configures promiscuous in mode 4 > > > > > > >>without the > > > > > application asking (it's considered there as a bug in dpdk). > > > > > > >>I think that providing another option will be better. > > > > > > >> > > > > > > >>I think providing another option will be better as well. > > > > > > >>However we > > > > > disagree on the option. > > > > > > >>If the PMD has no other way to subscribe the multicast > > > > > > >>group, it has to > > > > > use promiscuous mode. > > > > > > > > > > > > > >>Yes, it is true but there are a lot of other and better > > > > > > >>options, > > > > > promiscuous is greedy! Should be the last alternative to use. > > > > > > > > > > > > > >Unfortunately, it's the only option implemented. > > > > > > > > > > > > Yes, I know, I suggest to change it or at least not to make it worst. > > > > > > > > > > > > >>Providing a list of options only makes life complicated for > > > > > > >>the developer and doesn't really make any difference in the end > results. > > > > > > > > > > > > > >>A big different, for example: > > > > > > >>Let's say the bonding groups 2 devices that support rte_flow. > > > > > > >>The user don't want neither promiscuous nor all multicast, > > > > > > >>he just want to get it's mac traffic + LACP MC group > > > > > > >>traffic,(a realistic use case) if > > > > > he has an option to tell to the bond PMD, please use rte_flow > > > > > to configure the specific LACP MC group it will be great. 
> > > > > > >>Think how much work these applications should do in the > > > > > > >>current > > > > > behavior. > > > > > > > > > > > > > >The bond PMD should already know how to do that itself. > > > > > > > > > > > > The bond can do it with a lot of complexity, but again the > > > > > > user must know > > > > > what the bond chose to be synchronized. > > > > > > So, I think it's better that the user will define it because > > > > > > it is a traffic configuration (the same as promiscuous > > > > > > configuration - the user configures it) > > > > > > > Again, you are forcing more work on the user to ask them to > > > > > > > select > > > > > between the methods. > > > > > > > > > > > > We can create a default option as now(promiscuous). > > > > > > > > > > > > >> For instance, if the least common denominator between the > > > > > > >>two PMDs is promiscuous mode, you are going to be forced to > > > > > > >>run both in promiscuous mode instead of selecting the best > > > > > > >>mode for > > > each PMD. > > > > > > > > > > > > > >>In this case promiscuous is better, Using a different > > > > > > >>configuration is worst and against the bonding PMD > > > > > principle to get a consistent traffic from the slaves. > > > > > > >>So, if one uses allmulti and one uses promiscuous the > > > > > > >>application may get an inconsistent traffic and it may > > > > > > >>trigger a lot of problems and > > > > > complications for some applications. > > > > > > > > > > > > > >Those applications should already have those problems. > > > > > > > I can make the counter > > > > > > >argument that there are potentially applications relying on > > > > > > >the broken > > > > > behavior. > > > > > > > > > > > > You right. So adding allmulticast will require changes in > > > > > > these > > > applications. > > > > > > > > > > > > >We need to ignore those issues and fix this the "right" way. 
> > > > > > >The "right" way IMHO is the pass the least amount of traffic > > > > > > >possible in each > > > > > case. > > > > > > > > > > > > Not in cost of an inconsistency, but looks like we are not agree here. > > > > > > > > > > > > > > > > I have recently run into this issue again with a device that > > > > > doesn't support promiscuous, but does let me subscribe to the > > > > > appropriate > > > multicast groups. > > > > > At this point, I am leaning toward adding another API call to > > > > > the bonding API so that the user can provide a callback to setup > > > > > whatever they want on the slaves. > > > > > The default setup routine would be enable promiscuous. > > > > > > > > > > Comments? > > > > > > > > The bonding already allows to the users to do operations directly > > > > to the > > > slaves(it exports the port ids - rte_eth_bond_slaves_get), so I > > > don't understand why do you need a new API. > > > > The only change you need may be to add parameter to disable the > > > promiscuous configuration in mode4. > > > > > > Changing the API is new API. We should attempt to not break any of > > > the existing API. > > > > > > As for being able to operate on the slaves, yet, you can. But the > > > bonding PMD also controls the slaves as well. It seems cleaner to > > > make this explcit, that the bonding driver calls out to the > > > application to setup the 802.3ad listening when it needs to be done. > > > If you want to control it a different way, you simply provide a null > > > routine that does nothing and control it however you like. > > > > The issue is that the bonding PMD cannot be synchronized with such like > callback, it doesn't know what was done by the application. > > Exactly. That's why you need to be to change the native behavior of the > bonding PMD. I don't sure I understand you here, But I agree that both bonding PMD and the application should be able to know exactly what is the configuration set at any time. 
The current behavior is that bonding configures promiscuous in mode 4 without any documentation or guide telling the application user; that is problematic and needs to be changed. > > > I don't think we should open direct application calls to the slaves by an > API; if the application really needs it, as we said, it already has the option to do it > without a new API, though it is really not recommended by the bonding guide. > > This would be the opposite direction. The slaves would be calling out to the > application to ask the application to do something. This "something" must be known by the bonding PMD as well - that is what I am saying. In the callback solution you are suggesting now, the application knows but the bonding PMD doesn't. > With the existing bonding > PMD it will always attempt to enable promisc and that isn't desirable in all > situations. Agreed, it's greedy. > Previously, activate_slave() always happened as part of slave add. > That shouldn't be the case now if the bonding PMD isn't running. Adding and > removing slaves while the bonding PMD is started should probably not be > done because I am not sure we can ensure that there is race free behavior > with the 802.3ad rx/tx routines. I'm not sure I understand what you want to say in the above statement. I wasn't talking about synchronization in terms of locking, but in terms of "every entity must know what the slave configurations are at any time". That's what I suggested in the past; saying it again: let the user decide how he wants to configure the LACP MC group via the bonding command line: 1. promiscuous (default) 2. allmulti 3. set mc list 4. set mac .... This way both the application user and the bonding PMD know what the configuration is at any time (it also preserves consistency and the previous bonding behavior). ^ permalink raw reply [flat|nested] 26+ messages in thread
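The proposed lacp_mc_grp_conf devarg could behave roughly as below. This is a sketch of the proposal only — the enum values, the `caps` flags, and `apply_lacp_conf()` are invented for illustration; the point is that a slave unable to honour the user's explicit choice is rejected outright rather than silently reconfigured, so all slaves stay consistently configured:

```c
#include <assert.h>

/* Hypothetical values for the proposed lacp_mc_grp_conf devarg. */
enum lacp_mc_grp_conf {
    CONF_PROMISC,   /* default, matches today's behavior */
    CONF_ALLMULTI,
    CONF_MC_LIST,   /* subscribe only the LACP MC group */
    CONF_ADD_MAC,
};

/* Invented capability flags; real code would probe the slave PMD's
 * return codes instead of a static table. */
struct caps {
    int promisc, allmulti, mc_list, add_mac;
};

/* Apply the user's choice to one slave: 0 on success, -1 if the slave
 * cannot honour it and must be rejected (no silent fallback). */
static int apply_lacp_conf(enum lacp_mc_grp_conf conf, struct caps c)
{
    switch (conf) {
    case CONF_PROMISC:  return c.promisc  ? 0 : -1;
    case CONF_ALLMULTI: return c.allmulti ? 0 : -1;
    case CONF_MC_LIST:  return c.mc_list  ? 0 : -1;
    case CONF_ADD_MAC:  return c.add_mac  ? 0 : -1;
    }
    return -1;
}
```

Under this scheme both the application and the bonding PMD know the exact slave configuration at all times, at the cost of the user having to pick a method all slaves support.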
* Re: [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 2018-08-02 14:24 ` Matan Azrad 2018-08-02 15:53 ` Doherty, Declan @ 2018-08-02 21:05 ` Chas Williams 1 sibling, 0 replies; 26+ messages in thread From: Chas Williams @ 2018-08-02 21:05 UTC (permalink / raw) To: Matan Azrad; +Cc: Declan Doherty, Radu Nicolau, dev, Chas Williams On Thu, Aug 2, 2018 at 10:24 AM Matan Azrad <matan@mellanox.com> wrote: > Hi > > From: Doherty, Declan > > On 02/08/2018 7:35 AM, Matan Azrad wrote: > > > Hi Chas, Radu > > > > > > From: Chas Williams > > >> On Wed, Aug 1, 2018 at 9:48 AM Radu Nicolau <radu.nicolau@intel.com> > > >> wrote: > > >> > > >>> > > >>> > > >>> On 8/1/2018 2:34 PM, Chas Williams wrote: > > >>> > > >>> > > >>> > > >>> On Wed, Aug 1, 2018 at 9:04 AM Radu Nicolau <radu.nicolau@intel.com> > > >>> wrote: > > >>> > > >>>> Update the bonding promiscuous mode enable/disable functions as to > > >>>> propagate the change to all slaves instead of doing nothing; this > > >>>> seems to be the correct behaviour according to the standard, and > > >>>> also implemented in the linux network stack. 
> > >>>> > > >>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> > > >>>> --- > > >>>> drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------ > > >>>> 1 file changed, 2 insertions(+), 6 deletions(-) > > >>>> > > >>>> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c > > >>>> b/drivers/net/bonding/rte_eth_bond_pmd.c > > >>>> index ad6e33f..16105cb 100644 > > >>>> --- a/drivers/net/bonding/rte_eth_bond_pmd.c > > >>>> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c > > >>>> @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct > > >>>> rte_eth_dev > > >>>> *eth_dev) > > >>>> case BONDING_MODE_ROUND_ROBIN: > > >>>> case BONDING_MODE_BALANCE: > > >>>> case BONDING_MODE_BROADCAST: > > >>>> + case BONDING_MODE_8023AD: > > >>>> for (i = 0; i < internals->slave_count; i++) > > >>>> > > >>>> rte_eth_promiscuous_enable(internals->slaves[i].port_id); > > >>>> break; > > >>>> - /* In mode4 promiscus mode is managed when slave is > > >> added/removed > > >>>> */ > > >>>> > > >>> > > >>> This comment is true (and it appears it is always on in 802.3ad > mode): > > >>> > > >>> /* use this port as agregator */ > > >>> port->aggregator_port_id = slave_id; > > >>> rte_eth_promiscuous_enable(slave_id); > > >>> > > >>> If we are going to do this here, we should probably get rid of it in > > >>> the other location so that future readers aren't confused about > > >>> which is the one doing the work. > > >>> > > >>> Since some adapters don't have group multicast support, we might > > >>> already be in promiscuous anyway. Turning off promiscuous for the > > >>> bonding master might turn it off in the slaves where an application > > >>> has already enabled it. > > >>> > > >>> > > >>> The idea was to preserve the current behavior except for the > > >>> explicit promiscuous disable/enable APIs; an application may disable > > >>> the promiscuous mode on the bonding port and then enable it back, > > >>> expecting it to propagate to the slaves. 
> > >>> > > >> > > >> Yes, but an application doing that will break 802.3ad because > > >> promiscuous mode is used to receive the LAG PDUs which are on a > multicast > > group. > > >> That's why this code doesn't let you disable promiscuous when you are > > >> in 802.3ad mode. > > >> > > >> If you want to do this it needs to be more complicated. In 802.3ad, > > >> you should try to add the multicast group to the slave interface. If > > >> that fails, turn on promisc mode for the slave. Make note of it. > > >> Later if bonding wants to enabled/disable promisc mode for the > > >> slaves, it needs to check if that slaves needs to remain in promisc to > > continue to get the LAG PDUs. > > > > > > I agree with Chas that this commit will hurt current LACP logic, but > maybe > > this is the time to open discussion about it: > > > The current bonding implementation is greedy while it setting > > > promiscuous automatically for LACP, The user asks LACP and he gets > > promiscuous by the way. > > > > > > So if the user don't want promiscuous he must to disable it directly > via slaves > > ports and to allow LACP using rte_flow\flow > > director\set_mc_addr_list\allmulti... > > > > > > I think the best way is to let the user to enable LACP as he wants, > directly via > > slaves or by the bond promiscuous_enable API. > > > For sure, it must be documented well. > > > > > > Matan. > > > > > > > I'm thinking that default behavior should be that promiscuous mode > should be > > disabled by default, and that the bond port should fail to start if any > of the slave > > ports can't support subscription to the LACP multicast group. At this > point the > > user can decided to enable promiscuous mode on the bond port (and > therefore > > on all the slaves) and then start the bond. 
If we have slaves with > different > > configurations for multicast subscriptions or promiscuous mode > enablement, > > then there is potentially the opportunity for inconsistency in traffic > depending > > on which slaves are active. > > > Personally I would prefer that all configuration if possible is > propagated > > through the bond port. So if a user wants to use a port which doesn't > support > > multicast subscription then all ports in the bond need to be in > promiscuous > > mode, and the user needs to explicitly enable it through the bond port, > that way > > at least we can guarantee consist traffic irrespective of which ports in > the bond > > are active at any one time. > > That's exactly what I said :) > > I suggest to do it like next, > To add one more parameter for LACP which means how to configure the LACP > MC group - lacp_mc_grp_conf: > 1. rte_flow. > 2. flow director. > 3. add_mac. > 3. set_mc_add_list > 4. allmulti > 5. promiscuous > Maybe more... or less :) > > By this way the user decides how to do it, if it's fail for a slave, the > salve should be rejected. > Conflict with another configuration(for example calling to promiscuous > disable while running LACP lacp_mc_grp_conf=5) should raise an error. > What do you think? > Not a good idea. The slave should do what it needs to go to get subscribed to the multicast group. For the user to make this decision ahead of time, he would have to know that all the PMDs support the same method. Forcing more and more work on the caller of the API is not a solution. > > Matan. > > > ^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous 2018-08-01 12:57 [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 Radu Nicolau 2018-08-01 13:34 ` Chas Williams @ 2018-08-02 9:57 ` Radu Nicolau 2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 2/2] net/bonding: propagate promiscous mode in mode 4 Radu Nicolau ` (2 more replies) 1 sibling, 3 replies; 26+ messages in thread From: Radu Nicolau @ 2018-08-02 9:57 UTC (permalink / raw) To: dev; +Cc: declan.doherty, chas3, matan, Radu Nicolau Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> --- drivers/net/bonding/rte_eth_bond_8023ad.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c index f8cea4b..730087f 100644 --- a/drivers/net/bonding/rte_eth_bond_8023ad.c +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c @@ -917,7 +917,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, }; char mem_name[RTE_ETH_NAME_MAX_LEN]; - int socket_id; + int socket_id, ret; unsigned element_size; uint32_t total_tx_desc; struct bond_tx_queue *bd_tx_q; @@ -942,7 +942,12 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, /* use this port as agregator */ port->aggregator_port_id = slave_id; - rte_eth_promiscuous_enable(slave_id); + + /* try to enable multicast, if fail set promiscuous */ + rte_eth_allmulticast_enable(slave_id); + ret = rte_eth_allmulticast_get(slave_id); + if (ret != 1) + rte_eth_promiscuous_enable(slave_id); timer_cancel(&port->warning_timer); -- 2.7.5 ^ permalink raw reply [flat|nested] 26+ messages in thread
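The fallback in the patch above — enable allmulticast, read the state back with rte_eth_allmulticast_get(), and only then enable promiscuous — can be modelled in isolation. The `mock_port` type and the stub functions below stand in for the real rte_eth_* calls, which require a live port:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for one slave port's receive configuration. */
struct mock_port {
    bool supports_allmulti; /* does the PMD honour allmulticast? */
    bool allmulti;
    bool promisc;
};

/* Models rte_eth_allmulticast_enable(): a PMD without support
 * simply leaves the flag clear. */
static void allmulticast_enable(struct mock_port *p)
{
    if (p->supports_allmulti)
        p->allmulti = true;
}

/* Models rte_eth_allmulticast_get(): 1 if enabled, 0 otherwise. */
static int allmulticast_get(const struct mock_port *p)
{
    return p->allmulti ? 1 : 0;
}

static void promiscuous_enable(struct mock_port *p)
{
    p->promisc = true;
}

/* Mirror of the patch: try allmulticast first, and fall back to
 * promiscuous only when the PMD did not accept the request. */
static void activate_slave_rx(struct mock_port *p)
{
    allmulticast_enable(p);
    if (allmulticast_get(p) != 1)
        promiscuous_enable(p);
}
```

Note the patch detects failure by reading the flag back rather than by a return code, since rte_eth_allmulticast_enable() returned void in this DPDK version.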
* [dpdk-dev] [PATCH v2 2/2] net/bonding: propagate promiscous mode in mode 4 2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous Radu Nicolau @ 2018-08-02 9:57 ` Radu Nicolau 2018-08-02 10:21 ` [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous Matan Azrad 2018-08-02 21:16 ` Chas Williams 2 siblings, 0 replies; 26+ messages in thread From: Radu Nicolau @ 2018-08-02 9:57 UTC (permalink / raw) To: dev; +Cc: declan.doherty, chas3, matan, Radu Nicolau Update the bonding promiscuous mode enable/disable functions as to propagate the change to all slaves instead of doing nothing. Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> --- drivers/net/bonding/rte_eth_bond_pmd.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c index ad6e33f..fcb2268 100644 --- a/drivers/net/bonding/rte_eth_bond_pmd.c +++ b/drivers/net/bonding/rte_eth_bond_pmd.c @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev) case BONDING_MODE_ROUND_ROBIN: case BONDING_MODE_BALANCE: case BONDING_MODE_BROADCAST: + case BONDING_MODE_8023AD: for (i = 0; i < internals->slave_count; i++) rte_eth_promiscuous_enable(internals->slaves[i].port_id); break; - /* In mode4 promiscus mode is managed when slave is added/removed */ - case BONDING_MODE_8023AD: - break; /* Promiscuous mode is propagated only to primary slave */ case BONDING_MODE_ACTIVE_BACKUP: case BONDING_MODE_TLB: @@ -2648,8 +2646,11 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev) for (i = 0; i < internals->slave_count; i++) rte_eth_promiscuous_disable(internals->slaves[i].port_id); break; - /* In mode4 promiscus mode is set managed when slave is added/removed */ + /* Propagate to slaves only if multicast is enabled */ case BONDING_MODE_8023AD: + for (i = 0; i < internals->slave_count; i++) + if 
(rte_eth_allmulticast_get(internals->slaves[i].port_id) == 1) + rte_eth_promiscuous_disable(internals->slaves[i].port_id); break; /* Promiscuous mode is propagated only to primary slave */ case BONDING_MODE_ACTIVE_BACKUP: -- 2.7.5 ^ permalink raw reply [flat|nested] 26+ messages in thread
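The disable path in patch 2/2 can be modelled the same way: in BONDING_MODE_8023AD, promiscuous is dropped only on slaves whose allmulticast flag is set, so a slave that had to fall back to promiscuous for LACPDU reception keeps it. A self-contained sketch (the `slave` struct is invented for the model):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of one slave port's receive state. */
struct slave {
    bool allmulti; /* rte_eth_allmulticast_get() == 1 */
    bool promisc;
};

/* Mirror of the v2 disable case for BONDING_MODE_8023AD: only slaves
 * that still receive the LACP group via allmulticast lose promiscuous;
 * slaves relying on promiscuous for LACPDUs are left untouched. */
static void bond_8023ad_promisc_disable(struct slave *slaves, int n)
{
    for (int i = 0; i < n; i++)
        if (slaves[i].allmulti)
            slaves[i].promisc = false;
}
```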
* Re: [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous 2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous Radu Nicolau 2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 2/2] net/bonding: propagate promiscous mode in mode 4 Radu Nicolau @ 2018-08-02 10:21 ` Matan Azrad 2018-08-02 21:16 ` Chas Williams 2 siblings, 0 replies; 26+ messages in thread From: Matan Azrad @ 2018-08-02 10:21 UTC (permalink / raw) To: Radu Nicolau, dev; +Cc: declan.doherty, chas3 Hi Radu From: Radu Nicolau > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> > --- > drivers/net/bonding/rte_eth_bond_8023ad.c | 9 +++++++-- > 1 file changed, 7 insertions(+), 2 deletions(-) > > diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c > b/drivers/net/bonding/rte_eth_bond_8023ad.c > index f8cea4b..730087f 100644 > --- a/drivers/net/bonding/rte_eth_bond_8023ad.c > +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c > @@ -917,7 +917,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev > *bond_dev, > }; > > char mem_name[RTE_ETH_NAME_MAX_LEN]; > - int socket_id; > + int socket_id, ret; > unsigned element_size; > uint32_t total_tx_desc; > struct bond_tx_queue *bd_tx_q; > @@ -942,7 +942,12 @@ bond_mode_8023ad_activate_slave(struct > rte_eth_dev *bond_dev, > > /* use this port as agregator */ > port->aggregator_port_id = slave_id; > - rte_eth_promiscuous_enable(slave_id); > + > + /* try to enable multicast, if fail set promiscuous */ > + rte_eth_allmulticast_enable(slave_id); > + ret = rte_eth_allmulticast_get(slave_id); > + if (ret != 1) > + rte_eth_promiscuous_enable(slave_id); It is still greedy to configure allmulticast for LACP, The user may not expect to get multicast traffic. Moreover, here each slave can be with different unexpected configuration, may be even worst from the application perspective. 
> > timer_cancel(&port->warning_timer); > > -- > 2.7.5 ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous 2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous Radu Nicolau 2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 2/2] net/bonding: propagate promiscous mode in mode 4 Radu Nicolau 2018-08-02 10:21 ` [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous Matan Azrad @ 2018-08-02 21:16 ` Chas Williams 2 siblings, 0 replies; 26+ messages in thread From: Chas Williams @ 2018-08-02 21:16 UTC (permalink / raw) To: Radu Nicolau; +Cc: dev, Declan Doherty, Chas Williams, Matan Azrad On Thu, Aug 2, 2018 at 6:03 AM Radu Nicolau <radu.nicolau@intel.com> wrote: > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com> > --- > drivers/net/bonding/rte_eth_bond_8023ad.c | 9 +++++++-- > 1 file changed, 7 insertions(+), 2 deletions(-) > > diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c > b/drivers/net/bonding/rte_eth_bond_8023ad.c > index f8cea4b..730087f 100644 > --- a/drivers/net/bonding/rte_eth_bond_8023ad.c > +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c > @@ -917,7 +917,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev > *bond_dev, > }; > > char mem_name[RTE_ETH_NAME_MAX_LEN]; > - int socket_id; > + int socket_id, ret; > unsigned element_size; > uint32_t total_tx_desc; > struct bond_tx_queue *bd_tx_q; > @@ -942,7 +942,12 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev > *bond_dev, > > /* use this port as agregator */ > port->aggregator_port_id = slave_id; > - rte_eth_promiscuous_enable(slave_id); > + > + /* try to enable multicast, if fail set promiscuous */ > + rte_eth_allmulticast_enable(slave_id); > + ret = rte_eth_allmulticast_get(slave_id); > You should really try to use rte_eth_dev_set_mc_addr_list() first. Luckily, bonding doesn't implement rte_eth_dev_set_mc_addr_list() so you don't need to reserve a slot. 
> + if (ret != 1) > + rte_eth_promiscuous_enable(slave_id); > > timer_cancel(&port->warning_timer); > > -- > 2.7.5 > > ^ permalink raw reply [flat|nested] 26+ messages in thread
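Putting the review comment together with the patch gives a three-tier preference order per slave: subscribe only the Slow Protocols group 01:80:C2:00:00:02 via rte_eth_dev_set_mc_addr_list() where the PMD implements it, else allmulticast, else promiscuous as the last resort. A sketch of the selection logic — the `slave_caps` flags are stand-ins for probing the real return codes, not a DPDK structure:

```c
#include <assert.h>

enum lacp_rx_mode { LACP_MC_LIST, LACP_ALLMULTI, LACP_PROMISC };

/* Invented capability flags; a real implementation would attempt each
 * call in order and use the next tier on failure. */
struct slave_caps {
    int has_mc_list;  /* rte_eth_dev_set_mc_addr_list() supported */
    int has_allmulti; /* allmulticast supported */
};

/* Least-greedy mode first, promiscuous only as the ultimate fallback. */
static enum lacp_rx_mode pick_lacp_rx_mode(struct slave_caps c)
{
    if (c.has_mc_list)
        return LACP_MC_LIST;
    if (c.has_allmulti)
        return LACP_ALLMULTI;
    return LACP_PROMISC;
}
```

This keeps each slave passing the least amount of extra traffic it can, at the cost of slaves in one bond possibly running in different modes — the consistency concern raised earlier in the thread.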
end of thread, other threads:[~2018-09-17 6:29 UTC | newest] Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2018-08-01 12:57 [dpdk-dev] [PATCH] net/bonding: propagate promiscous mode in mode 4 Radu Nicolau 2018-08-01 13:34 ` Chas Williams 2018-08-01 13:47 ` Radu Nicolau 2018-08-01 15:35 ` Chas Williams 2018-08-02 6:35 ` Matan Azrad 2018-08-02 13:23 ` Doherty, Declan 2018-08-02 14:24 ` Matan Azrad 2018-08-02 15:53 ` Doherty, Declan 2018-08-02 17:33 ` Matan Azrad 2018-08-02 21:10 ` Chas Williams 2018-08-03 5:47 ` Matan Azrad 2018-08-06 16:00 ` Chas Williams 2018-08-06 17:46 ` Matan Azrad 2018-08-06 19:01 ` Chas Williams 2018-08-06 19:35 ` Matan Azrad 2018-09-11 3:31 ` Chas Williams 2018-09-12 5:56 ` Matan Azrad 2018-09-13 15:14 ` Chas Williams 2018-09-13 15:40 ` Matan Azrad 2018-09-16 16:14 ` Chas Williams 2018-09-17 6:29 ` Matan Azrad 2018-08-02 21:05 ` Chas Williams 2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous Radu Nicolau 2018-08-02 9:57 ` [dpdk-dev] [PATCH v2 2/2] net/bonding: propagate promiscous mode in mode 4 Radu Nicolau 2018-08-02 10:21 ` [dpdk-dev] [PATCH v2 1/2] net/bonding: in 8023ad mode enable all multicast rather than promiscuous Matan Azrad 2018-08-02 21:16 ` Chas Williams