DPDK patches and discussions
* [dpdk-dev] [PATCH 0/2] net/bonding: a couple minor fixes
@ 2019-01-10 10:22 Hyong Youb Kim
  2019-01-10 10:22 ` [dpdk-dev] [PATCH 1/2] net/bonding: do not set promisc on non-existent primary port Hyong Youb Kim
  2019-01-10 10:22 ` [dpdk-dev] [PATCH 2/2] net/bonding: avoid the next active slave going out of bound Hyong Youb Kim
  0 siblings, 2 replies; 8+ messages in thread
From: Hyong Youb Kim @ 2019-01-10 10:22 UTC (permalink / raw)
  To: Ferruh Yigit, Declan Doherty, Chas Williams; +Cc: dev, Hyong Youb Kim

A couple of patches for minor issues found during in-house testing. I
do not have a strong opinion on the first one, as it addresses a
warning message, not technically a bug. But the second one fixes an
easy-to-reproduce crash. If someone has a better/cleaner fix, I am all
ears.

Hyong Youb Kim (2):
  net/bonding: do not set promisc on non-existent primary port
  net/bonding: avoid the next active slave going out of bound

 drivers/net/bonding/rte_eth_bond_pmd.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

-- 
2.16.2

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dpdk-dev] [PATCH 1/2] net/bonding: do not set promisc on non-existent primary port
  2019-01-10 10:22 [dpdk-dev] [PATCH 0/2] net/bonding: a couple minor fixes Hyong Youb Kim
@ 2019-01-10 10:22 ` Hyong Youb Kim
  2019-02-09 13:16   ` Chas Williams
  2019-01-10 10:22 ` [dpdk-dev] [PATCH 2/2] net/bonding: avoid the next active slave going out of bound Hyong Youb Kim
  1 sibling, 1 reply; 8+ messages in thread
From: Hyong Youb Kim @ 2019-01-10 10:22 UTC (permalink / raw)
  To: Ferruh Yigit, Declan Doherty, Chas Williams; +Cc: dev, Hyong Youb Kim, stable

For active-backup, tlb, and alb modes,
bond_ethdev_promiscuous_{enable,disable} tries to set promiscuous mode
on the primary port, even when there are no slaves. This is harmless,
as rte_eth_promiscuous_{enable,disable} does nothing if the port
number is invalid, but it does print a warning message. Here is an
example from testpmd.

testpmd> create bonded device 5 0
Created new bonded device net_bonding_testpmd_0 on (port 4).
Invalid port_id=33
testpmd> set promisc 4 off
Invalid port_id=33

33 in this case is RTE_MAX_ETHPORTS + 1, the invalid primary port
number used within the bonding driver. The warning message is harmless
but can confuse the user. So do not try to set promiscuous mode on the
primary port when we know it does not exist (i.e. there are no
slaves).

Fixes: 2efb58cbab6e ("bond: new link bonding library")
Cc: stable@dpdk.org

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 44deaf119..daf2440cd 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -2593,6 +2593,9 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
+		/* Do not touch promisc when there cannot be primary ports */
+		if (internals->slave_count == 0)
+			break;
 		rte_eth_promiscuous_enable(internals->current_primary_port);
 	}
 }
@@ -2621,6 +2624,9 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
+		/* Do not touch promisc when there cannot be primary ports */
+		if (internals->slave_count == 0)
+			break;
 		rte_eth_promiscuous_disable(internals->current_primary_port);
 	}
 }
-- 
2.16.2


* [dpdk-dev] [PATCH 2/2] net/bonding: avoid the next active slave going out of bound
  2019-01-10 10:22 [dpdk-dev] [PATCH 0/2] net/bonding: a couple minor fixes Hyong Youb Kim
  2019-01-10 10:22 ` [dpdk-dev] [PATCH 1/2] net/bonding: do not set promisc on non-existent primary port Hyong Youb Kim
@ 2019-01-10 10:22 ` Hyong Youb Kim
  2019-02-09 13:17   ` Chas Williams
  1 sibling, 1 reply; 8+ messages in thread
From: Hyong Youb Kim @ 2019-01-10 10:22 UTC (permalink / raw)
  To: Ferruh Yigit, Declan Doherty, Chas Williams; +Cc: dev, Hyong Youb Kim, stable

For bonding modes like broadcast that use bond_ethdev_rx_burst(), it
is fairly easy to produce a crash simply by bringing a slave port's
link down. When slave links go down, the driver on one thread reduces
active_slave_count via the LSC callback and deactivate_slave(). At the
same time, bond_ethdev_rx_burst() running on a forwarding thread may
increment active_slave (next active slave) beyond
active_slave_count. Here is a typical sequence of events.

At time 0:
active_slave_count = 3
active_slave = 2

At time 1:
A slave link goes down.
Thread 0 (main) reduces active_slave_count to 2.

At time 2:
Thread 1 (forwarding) executes bond_ethdev_rx_burst().
- Reads active_slave_count = 2.
- Increments active_slave at the end to 3.

From this point on, every time bond_ethdev_rx_burst() runs,
active_slave increments by one, eventually going well out of bounds of
the active_slaves array and causing a crash.

Make the Rx burst function first check that active_slave is within
bounds. If not, reset it to 0 to avoid out-of-range array access.

Fixes: e1110e977648 ("net/bonding: fix Rx slave fairness")
Cc: stable@dpdk.org

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index daf2440cd..bc2405e54 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -68,6 +68,15 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	internals = bd_rx_q->dev_private;
 	slave_count = internals->active_slave_count;
 	active_slave = internals->active_slave;
+	/*
+	 * Reset the active slave index, in case active_slave goes out
+	 * of bounds.  It can happen when slave links go down, and
+	 * another thread (LSC callback) shrinks the slave count.
+	 */
+	if (active_slave >= slave_count) {
+		internals->active_slave = 0;
+		active_slave = 0;
+	}
 
 	for (i = 0; i < slave_count && nb_pkts; i++) {
 		uint16_t num_rx_slave;
@@ -273,6 +282,11 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
 	active_slave = internals->active_slave;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * slave_count);
+	/* active_slave may go out of bounds. See bond_ethdev_rx_burst() */
+	if (active_slave >= slave_count) {
+		internals->active_slave = 0;
+		active_slave = 0;
+	}
 
 	for (i = 0; i < slave_count && nb_pkts; i++) {
 		uint16_t num_rx_slave;
-- 
2.16.2


* Re: [dpdk-dev] [PATCH 1/2] net/bonding: do not set promisc on non-existent primary port
  2019-01-10 10:22 ` [dpdk-dev] [PATCH 1/2] net/bonding: do not set promisc on non-existent primary port Hyong Youb Kim
@ 2019-02-09 13:16   ` Chas Williams
  2019-02-21 14:38     ` Ferruh Yigit
  0 siblings, 1 reply; 8+ messages in thread
From: Chas Williams @ 2019-02-09 13:16 UTC (permalink / raw)
  To: Hyong Youb Kim, Ferruh Yigit, Declan Doherty, Chas Williams; +Cc: dev, stable



On 1/10/19 5:22 AM, Hyong Youb Kim wrote:
> For active-backup, tlb, and alb mode,
> bond_ethdev_promiscuous_{enable,disable} tries to set promisc mode on
> the primary port, even when there are no slaves. It is harmless, as
> rte_eth_promiscuous_{enable,disable} does nothing if the port number
> is invalid. But, it does print a warning message. Here is an example
> from testpmd.
> 
> testpmd> create bonded device 5 0
> Created new bonded device net_bonding_testpmd_0 on (port 4).
> Invalid port_id=33
> testpmd> set promisc 4 off
> Invalid port_id=33
> 
> 33 in this case is RTE_MAX_ETHPORTS + 1, the invalid primary port
> number used within the bonding driver. This warning message is
> harmless but can be confusing to the user. So do not try to set
> promisc on a primary port when we know it does not exist (i.e. no
> slaves).
> 
> Fixes: 2efb58cbab6e ("bond: new link bonding library")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>

Acked-by: Chas Williams <chas3@att.com>

> ---
>   drivers/net/bonding/rte_eth_bond_pmd.c | 6 ++++++
>   1 file changed, 6 insertions(+)
> 
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index 44deaf119..daf2440cd 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -2593,6 +2593,9 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
>   	case BONDING_MODE_TLB:
>   	case BONDING_MODE_ALB:
>   	default:
> +		/* Do not touch promisc when there cannot be primary ports */
> +		if (internals->slave_count == 0)
> +			break;
>   		rte_eth_promiscuous_enable(internals->current_primary_port);
>   	}
>   }
> @@ -2621,6 +2624,9 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
>   	case BONDING_MODE_TLB:
>   	case BONDING_MODE_ALB:
>   	default:
> +		/* Do not touch promisc when there cannot be primary ports */
> +		if (internals->slave_count == 0)
> +			break;
>   		rte_eth_promiscuous_disable(internals->current_primary_port);
>   	}
>   }
> 


* Re: [dpdk-dev] [PATCH 2/2] net/bonding: avoid the next active slave going out of bound
  2019-01-10 10:22 ` [dpdk-dev] [PATCH 2/2] net/bonding: avoid the next active slave going out of bound Hyong Youb Kim
@ 2019-02-09 13:17   ` Chas Williams
       [not found]     ` <7AE31235A30B41498D1C31348DC858BD5B5329AB@IRSMSX103.ger.corp.intel.com>
  0 siblings, 1 reply; 8+ messages in thread
From: Chas Williams @ 2019-02-09 13:17 UTC (permalink / raw)
  To: Hyong Youb Kim, Ferruh Yigit, Declan Doherty, Chas Williams; +Cc: dev, stable



On 1/10/19 5:22 AM, Hyong Youb Kim wrote:
> For bonding modes like broadcast that use bond_ethdev_rx_burst(), it
> is fairly easy to produce a crash simply by bringing a slave port's
> link down. When slave links go down, the driver on one thread reduces
> active_slave_count via the LSC callback and deactivate_slave(). At the
> same time, bond_ethdev_rx_burst() running on a forwarding thread may
> increment active_slave (next active slave) beyond
> active_slave_count. Here is a typical sequence of events.
> 
> At time 0:
> active_slave_count = 3
> active_slave = 2
> 
> At time 1:
> A slave link goes down.
> Thread 0 (main) reduces active_slave_count to 2.
> 
> At time 2:
> Thread 1 (forwarding) executes bond_ethdev_rx_burst().
> - Reads active_slave_count = 2.
> - Increments active_slave at the end to 3.
> 
> From this point on, every time bond_ethdev_rx_burst() runs,
> active_slave increments by one, eventually going well out of bounds of
> the active_slaves array and causing a crash.
> 
> Make the Rx burst function first check that active_slave is within
> bounds. If not, reset it to 0 to avoid out-of-range array access.
> 
> Fixes: e1110e977648 ("net/bonding: fix Rx slave fairness")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>

Acked-by: Chas Williams <chas3@att.com>

> ---
>   drivers/net/bonding/rte_eth_bond_pmd.c | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
> 
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index daf2440cd..bc2405e54 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -68,6 +68,15 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>   	internals = bd_rx_q->dev_private;
>   	slave_count = internals->active_slave_count;
>   	active_slave = internals->active_slave;
> +	/*
> +	 * Reset the active slave index, in case active_slave goes out
> +	 * of bounds.  It can happen when slave links go down, and
> +	 * another thread (LSC callback) shrinks the slave count.
> +	 */
> +	if (active_slave >= slave_count) {
> +		internals->active_slave = 0;
> +		active_slave = 0;
> +	}
>   
>   	for (i = 0; i < slave_count && nb_pkts; i++) {
>   		uint16_t num_rx_slave;
> @@ -273,6 +282,11 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
>   	active_slave = internals->active_slave;
>   	memcpy(slaves, internals->active_slaves,
>   			sizeof(internals->active_slaves[0]) * slave_count);
> +	/* active_slave may go out of bounds. See bond_ethdev_rx_burst() */
> +	if (active_slave >= slave_count) {
> +		internals->active_slave = 0;
> +		active_slave = 0;
> +	}
>   
>   	for (i = 0; i < slave_count && nb_pkts; i++) {
>   		uint16_t num_rx_slave;
> 


* Re: [dpdk-dev] [PATCH 2/2] net/bonding: avoid the next active slave going out of bound
       [not found]     ` <7AE31235A30B41498D1C31348DC858BD5B5329AB@IRSMSX103.ger.corp.intel.com>
@ 2019-02-18 15:25       ` Ferruh Yigit
  2019-02-20 16:28         ` [dpdk-dev] [dpdk-stable] " Ferruh Yigit
  0 siblings, 1 reply; 8+ messages in thread
From: Ferruh Yigit @ 2019-02-18 15:25 UTC (permalink / raw)
  To: Parthasarathy, JananeeX M, Chas Williams, Hyong Youb Kim,
	Doherty, Declan, Chas Williams
  Cc: dev, stable, Vemula, Hari KumarX, Pattan, Reshma

On 2/11/2019 10:25 AM, Parthasarathy, JananeeX M wrote:
> Hi
> 
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chas Williams
>> Sent: Saturday, February 09, 2019 6:47 PM
>> To: Hyong Youb Kim <hyonkim@cisco.com>; Yigit, Ferruh
>> <ferruh.yigit@intel.com>; Doherty, Declan <declan.doherty@intel.com>; Chas
>> Williams <chas3@att.com>
>> Cc: dev@dpdk.org; stable@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH 2/2] net/bonding: avoid the next active slave
>> going out of bound
>>
>>
>>
>> On 1/10/19 5:22 AM, Hyong Youb Kim wrote:
>>> For bonding modes like broadcast that use bond_ethdev_rx_burst(), it
>>> is fairly easy to produce a crash simply by bringing a slave port's
>>> link down. When slave links go down, the driver on one thread reduces
>>> active_slave_count via the LSC callback and deactivate_slave(). At the
>>> same time, bond_ethdev_rx_burst() running on a forwarding thread may
>>> increment active_slave (next active slave) beyond active_slave_count.
>>> Here is a typical sequence of events.
>>>
>>> At time 0:
>>> active_slave_count = 3
>>> active_slave = 2
>>>
>>> At time 1:
>>> A slave link goes down.
>>> Thread 0 (main) reduces active_slave_count to 2.
>>>
>>> At time 2:
>>> Thread 1 (forwarding) executes bond_ethdev_rx_burst().
>>> - Reads active_slave_count = 2.
>>> - Increments active_slave at the end to 3.
>>>
>>> From this point on, every time bond_ethdev_rx_burst() runs,
>>> active_slave increments by one, eventually going well out of bounds of
>>> the active_slaves array and causing a crash.
>>>
>>> Make the Rx burst function first check that active_slave is within
>>> bounds. If not, reset it to 0 to avoid out-of-range array access.
>>>
>>> Fixes: e1110e977648 ("net/bonding: fix Rx slave fairness")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
>>
>> Acked-by: Chas Williams <chas3@att.com>
>>
>>> ---
>>>   drivers/net/bonding/rte_eth_bond_pmd.c | 14 ++++++++++++++
>>>   1 file changed, 14 insertions(+)
>>>
>>> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
>>> b/drivers/net/bonding/rte_eth_bond_pmd.c
>>> index daf2440cd..bc2405e54 100644
>>> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
>>> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
>>> @@ -68,6 +68,15 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf
>> **bufs, uint16_t nb_pkts)
>>>   	internals = bd_rx_q->dev_private;
>>>   	slave_count = internals->active_slave_count;
>>>   	active_slave = internals->active_slave;
>>> +	/*
>>> +	 * Reset the active slave index, in case active_slave goes out
>>> +	 * of bounds.  It can happen when slave links go down, and
>>> +	 * another thread (LSC callback) shrinks the slave count.
>>> +	 */
>>> +	if (active_slave >= slave_count) {
>>> +		internals->active_slave = 0;
>>> +		active_slave = 0;
>>> +	}
> 
> Instead of introducing new checks at the top of these functions, it would be better to use greater-than-or-equal (>=) instead of the equality test in the condition below:
> if (++internals->active_slave == slave_count)
>                 internals->active_slave = 0;
> 
> That way we avoid the extra if conditions and still ensure internals->active_slave always points to a valid index.
> 
>>>
>>>   	for (i = 0; i < slave_count && nb_pkts; i++) {
>>>   		uint16_t num_rx_slave;
>>> @@ -273,6 +282,11 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void
>> *queue, struct rte_mbuf **bufs,
>>>   	active_slave = internals->active_slave;
>>>   	memcpy(slaves, internals->active_slaves,
>>>   			sizeof(internals->active_slaves[0]) * slave_count);
>>> +	/* active_slave may go out of bounds. See bond_ethdev_rx_burst() */
>>> +	if (active_slave >= slave_count) {
>>> +		internals->active_slave = 0;
>>> +		active_slave = 0;
>>> +	}
> 
> Same as above comment would be better.
>>>
>>>   	for (i = 0; i < slave_count && nb_pkts; i++) {
>>>   		uint16_t num_rx_slave;
>>>
> 
> It would also be better to check internals->active_slave during deactivate_slave() in rte_eth_bond_api.c.
> Since the slave count is decremented during deactivation, resetting the index there would be more appropriate.
> 
> Regards
> M.P.Jananee


I don't see this comment in patchwork; can you double-check that it reached
the mailing list?


* Re: [dpdk-dev] [dpdk-stable] [PATCH 2/2] net/bonding: avoid the next active slave going out of bound
  2019-02-18 15:25       ` Ferruh Yigit
@ 2019-02-20 16:28         ` Ferruh Yigit
  0 siblings, 0 replies; 8+ messages in thread
From: Ferruh Yigit @ 2019-02-20 16:28 UTC (permalink / raw)
  To: Parthasarathy, JananeeX M, Chas Williams, Hyong Youb Kim,
	Doherty, Declan, Chas Williams
  Cc: dev, stable, Vemula, Hari KumarX, Pattan, Reshma

On 2/18/2019 3:25 PM, Ferruh Yigit wrote:
> On 2/11/2019 10:25 AM, Parthasarathy, JananeeX M wrote:
>> Hi
>>
>>> -----Original Message-----
>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chas Williams
>>> Sent: Saturday, February 09, 2019 6:47 PM
>>> To: Hyong Youb Kim <hyonkim@cisco.com>; Yigit, Ferruh
>>> <ferruh.yigit@intel.com>; Doherty, Declan <declan.doherty@intel.com>; Chas
>>> Williams <chas3@att.com>
>>> Cc: dev@dpdk.org; stable@dpdk.org
>>> Subject: Re: [dpdk-dev] [PATCH 2/2] net/bonding: avoid the next active slave
>>> going out of bound
>>>
>>>
>>>
>>> On 1/10/19 5:22 AM, Hyong Youb Kim wrote:
>>>> For bonding modes like broadcast that use bond_ethdev_rx_burst(), it
>>>> is fairly easy to produce a crash simply by bringing a slave port's
>>>> link down. When slave links go down, the driver on one thread reduces
>>>> active_slave_count via the LSC callback and deactivate_slave(). At the
>>>> same time, bond_ethdev_rx_burst() running on a forwarding thread may
>>>> increment active_slave (next active slave) beyond active_slave_count.
>>>> Here is a typical sequence of events.
>>>>
>>>> At time 0:
>>>> active_slave_count = 3
>>>> active_slave = 2
>>>>
>>>> At time 1:
>>>> A slave link goes down.
>>>> Thread 0 (main) reduces active_slave_count to 2.
>>>>
>>>> At time 2:
>>>> Thread 1 (forwarding) executes bond_ethdev_rx_burst().
>>>> - Reads active_slave_count = 2.
>>>> - Increments active_slave at the end to 3.
>>>>
>>>> From this point on, every time bond_ethdev_rx_burst() runs,
>>>> active_slave increments by one, eventually going well out of bounds of
>>>> the active_slaves array and causing a crash.
>>>>
>>>> Make the Rx burst function first check that active_slave is within
>>>> bounds. If not, reset it to 0 to avoid out-of-range array access.
>>>>
>>>> Fixes: e1110e977648 ("net/bonding: fix Rx slave fairness")
>>>> Cc: stable@dpdk.org
>>>>
>>>> Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
>>>
>>> Acked-by: Chas Williams <chas3@att.com>
>>>
>>>> ---
>>>>   drivers/net/bonding/rte_eth_bond_pmd.c | 14 ++++++++++++++
>>>>   1 file changed, 14 insertions(+)
>>>>
>>>> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
>>>> b/drivers/net/bonding/rte_eth_bond_pmd.c
>>>> index daf2440cd..bc2405e54 100644
>>>> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
>>>> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
>>>> @@ -68,6 +68,15 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf
>>> **bufs, uint16_t nb_pkts)
>>>>   	internals = bd_rx_q->dev_private;
>>>>   	slave_count = internals->active_slave_count;
>>>>   	active_slave = internals->active_slave;
>>>> +	/*
>>>> +	 * Reset the active slave index, in case active_slave goes out
>>>> +	 * of bounds.  It can happen when slave links go down, and
>>>> +	 * another thread (LSC callback) shrinks the slave count.
>>>> +	 */
>>>> +	if (active_slave >= slave_count) {
>>>> +		internals->active_slave = 0;
>>>> +		active_slave = 0;
>>>> +	}
>>
>> Instead of introducing new checks at the top of these functions, it would be better to use greater-than-or-equal (>=) instead of the equality test in the condition below:
>> if (++internals->active_slave == slave_count)
>>                 internals->active_slave = 0;
>>
>> That way we avoid the extra if conditions and still ensure internals->active_slave always points to a valid index.
>>
>>>>
>>>>   	for (i = 0; i < slave_count && nb_pkts; i++) {
>>>>   		uint16_t num_rx_slave;
>>>> @@ -273,6 +282,11 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void
>>> *queue, struct rte_mbuf **bufs,
>>>>   	active_slave = internals->active_slave;
>>>>   	memcpy(slaves, internals->active_slaves,
>>>>   			sizeof(internals->active_slaves[0]) * slave_count);
>>>> +	/* active_slave may go out of bounds. See bond_ethdev_rx_burst() */
>>>> +	if (active_slave >= slave_count) {
>>>> +		internals->active_slave = 0;
>>>> +		active_slave = 0;
>>>> +	}
>>
>> Same as above comment would be better.
>>>>
>>>>   	for (i = 0; i < slave_count && nb_pkts; i++) {
>>>>   		uint16_t num_rx_slave;
>>>>
>>
>> It would also be better to check internals->active_slave during deactivate_slave() in rte_eth_bond_api.c.
>> Since the slave count is decremented during deactivation, resetting the index there would be more appropriate.
>>
>> Regards
>> M.P.Jananee
> 
> 
> I don't see this comment in patchwork; can you double-check that it reached
> the mailing list?

For record, this patch superseded by:
https://patches.dpdk.org/patch/50346/


* Re: [dpdk-dev] [PATCH 1/2] net/bonding: do not set promisc on non-existent primary port
  2019-02-09 13:16   ` Chas Williams
@ 2019-02-21 14:38     ` Ferruh Yigit
  0 siblings, 0 replies; 8+ messages in thread
From: Ferruh Yigit @ 2019-02-21 14:38 UTC (permalink / raw)
  To: Chas Williams, Hyong Youb Kim, Declan Doherty, Chas Williams; +Cc: dev, stable

On 2/9/2019 1:16 PM, Chas Williams wrote:
> 
> 
> On 1/10/19 5:22 AM, Hyong Youb Kim wrote:
>> For active-backup, tlb, and alb mode,
>> bond_ethdev_promiscuous_{enable,disable} tries to set promisc mode on
>> the primary port, even when there are no slaves. It is harmless, as
>> rte_eth_promiscuous_{enable,disable} does nothing if the port number
>> is invalid. But, it does print a warning message. Here is an example
>> from testpmd.
>>
>> testpmd> create bonded device 5 0
>> Created new bonded device net_bonding_testpmd_0 on (port 4).
>> Invalid port_id=33
>> testpmd> set promisc 4 off
>> Invalid port_id=33
>>
>> 33 in this case is RTE_MAX_ETHPORTS + 1, the invalid primary port
>> number used within the bonding driver. This warning message is
>> harmless but can be confusing to the user. So do not try to set
>> promisc on a primary port when we know it does not exist (i.e. no
>> slaves).
>>
>> Fixes: 2efb58cbab6e ("bond: new link bonding library")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
> 
> Acked-by: Chas Williams <chas3@att.com>

Applied to dpdk-next-net/master, thanks.


