patches for DPDK stable branches
* [PATCH] net/bonding: fix destroy dedicated queues flow
@ 2023-06-08  2:59 Chaoyong He
  2023-06-20  3:02 ` humin (Q)
  0 siblings, 1 reply; 3+ messages in thread
From: Chaoyong He @ 2023-06-08  2:59 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, matan, stable, Chaoyong He

From: Long Wu <long.wu@corigine.com>

In bonding mode 4 (802.3ad), enabling dedicated queues creates a flow
rule for each member port to steer LACP control traffic. This flow
must be destroyed when the member port is removed from the bonding
port.

If the stale flow is left behind, the member port may fail to be added
to another bonding port that also uses dedicated queues.

Add the flow destroy action to the member port removal function.

Fixes: 49dad9028e2a ("net/bonding: support flow API")
Cc: matan@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
---
 drivers/net/bonding/rte_eth_bond_api.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
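
For context, below is a minimal user-level sketch (not part of this
patch) of the scenario the fix addresses: moving a member port between
two mode-4 bonding ports that both use dedicated queues. The helper
name and port ids are hypothetical, and error handling is reduced.

#include <stdint.h>
#include <rte_eth_bond.h>

/* Hypothetical helper: detach 'member' from bond_a and attach it to
 * bond_b. Both bonds run in mode 4 with dedicated queues enabled. */
static int
move_member(uint16_t bond_a, uint16_t bond_b, uint16_t member)
{
	int ret;

	/* While 'member' was part of bond_a, the PMD installed a flow
	 * rule on it to steer LACP frames to the dedicated Rx queue.
	 * This patch makes the removal below also destroy that flow. */
	ret = rte_eth_bond_slave_remove(bond_a, member);
	if (ret != 0)
		return ret;

	/* Without the fix, the stale flow left on 'member' could cause
	 * this add (or the new bond's dedicated-queue setup) to fail. */
	return rte_eth_bond_slave_add(bond_b, member);
}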

diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index c0178369b4..85d0528b7c 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -712,6 +712,16 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
 		}
 	}
 
+	/* Remove the dedicated queues flow */
+	if (internals->mode == BONDING_MODE_8023AD &&
+		internals->mode4.dedicated_queues.enabled == 1 &&
+		internals->mode4.dedicated_queues.flow[slave_port_id] != NULL) {
+		rte_flow_destroy(slave_port_id,
+				internals->mode4.dedicated_queues.flow[slave_port_id],
+				&flow_error);
+		internals->mode4.dedicated_queues.flow[slave_port_id] = NULL;
+	}
+
 	slave_eth_dev = &rte_eth_devices[slave_port_id];
 	slave_remove(internals, slave_eth_dev);
 	slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_SLAVE);
-- 
2.39.1



* Re: [PATCH] net/bonding: fix destroy dedicated queues flow
  2023-06-08  2:59 [PATCH] net/bonding: fix destroy dedicated queues flow Chaoyong He
@ 2023-06-20  3:02 ` humin (Q)
  2023-06-20 11:20   ` Ferruh Yigit
  0 siblings, 1 reply; 3+ messages in thread
From: humin (Q) @ 2023-06-20  3:02 UTC (permalink / raw)
  To: Chaoyong He, dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, matan, stable

Acked-by: Min Hu (Connor) <humin29@huawei.com>

On 2023/6/8 10:59, Chaoyong He wrote:
> From: Long Wu <long.wu@corigine.com>
>
> In bonding mode 4 (802.3ad), enabling dedicated queues creates a flow
> rule for each member port to steer LACP control traffic. This flow
> must be destroyed when the member port is removed from the bonding
> port.
>
> If the stale flow is left behind, the member port may fail to be added
> to another bonding port that also uses dedicated queues.
>
> Add the flow destroy action to the member port removal function.
>
> Fixes: 49dad9028e2a ("net/bonding: support flow API")
> Cc: matan@mellanox.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Long Wu <long.wu@corigine.com>
> Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
> Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
> ---
>   drivers/net/bonding/rte_eth_bond_api.c | 10 ++++++++++
>   1 file changed, 10 insertions(+)
>
> diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
> index c0178369b4..85d0528b7c 100644
> --- a/drivers/net/bonding/rte_eth_bond_api.c
> +++ b/drivers/net/bonding/rte_eth_bond_api.c
> @@ -712,6 +712,16 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
>   		}
>   	}
>   
> +	/* Remove the dedicated queues flow */
> +	if (internals->mode == BONDING_MODE_8023AD &&
> +		internals->mode4.dedicated_queues.enabled == 1 &&
> +		internals->mode4.dedicated_queues.flow[slave_port_id] != NULL) {
> +		rte_flow_destroy(slave_port_id,
> +				internals->mode4.dedicated_queues.flow[slave_port_id],
> +				&flow_error);
> +		internals->mode4.dedicated_queues.flow[slave_port_id] = NULL;
> +	}
> +
>   	slave_eth_dev = &rte_eth_devices[slave_port_id];
>   	slave_remove(internals, slave_eth_dev);
>   	slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_SLAVE);


* Re: [PATCH] net/bonding: fix destroy dedicated queues flow
  2023-06-20  3:02 ` humin (Q)
@ 2023-06-20 11:20   ` Ferruh Yigit
  0 siblings, 0 replies; 3+ messages in thread
From: Ferruh Yigit @ 2023-06-20 11:20 UTC (permalink / raw)
  To: humin (Q), Chaoyong He, dev
  Cc: oss-drivers, niklas.soderlund, Long Wu, matan, stable

On 6/20/2023 4:02 AM, humin (Q) wrote:

> On 2023/6/8 10:59, Chaoyong He wrote:
>> From: Long Wu <long.wu@corigine.com>
>>
>> In bonding mode 4 (802.3ad), enabling dedicated queues creates a flow
>> rule for each member port to steer LACP control traffic. This flow
>> must be destroyed when the member port is removed from the bonding
>> port.
>>
>> If the stale flow is left behind, the member port may fail to be added
>> to another bonding port that also uses dedicated queues.
>>
>> Add the flow destroy action to the member port removal function.
>>
>> Fixes: 49dad9028e2a ("net/bonding: support flow API")
>> Cc: matan@mellanox.com
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Long Wu <long.wu@corigine.com>
>> Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
>> Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
>> 
>
> Acked-by: Min Hu (Connor) <humin29@huawei.com>
>

Applied to dpdk-next-net/main, thanks.


