DPDK patches and discussions
* [PATCH] net/mlx5: fix shared Rx queue list management
@ 2023-11-13  7:24 Bing Zhao
  2023-11-14  8:13 ` Raslan Darawsheh
  0 siblings, 1 reply; 3+ messages in thread
From: Bing Zhao @ 2023-11-13  7:24 UTC (permalink / raw)
  To: matan, viacheslavo, rasland, suanmingm, orika; +Cc: dev, xuemingl, stable

In the shared Rx queue case, the shared control structure can only be
released after the last port in the group has dereferenced it.

There is another management list that holds all of the Rx queue
structures used by a port. When the reference count of a control
structure drops to zero during port close, the structure can be removed
from that list directly without freeing the resource.
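
For clarity, the refcnt-zero branch of mlx5_rxq_release() reads as
sketched below with this change applied (condensed for illustration
only; the MR cache cleanup for non-hairpin queues and error handling
are elided, see the diff for the exact code):

    } else { /* Refcnt zero, closing device. */
        LIST_REMOVE(rxq_ctrl, next);    /* off the port's list right away */
        LIST_REMOVE(rxq, owner_entry);  /* this port drops its ownership */
        if (LIST_EMPTY(&rxq_ctrl->owners)) {
            /* Last port in the share group: release the resource. */
            if (rxq_ctrl->rxq.shared)
                LIST_REMOVE(rxq_ctrl, share_entry);
            mlx5_free(rxq_ctrl);
        }
        dev->data->rx_queues[idx] = NULL;
    }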

Fixes: 09c2555303be ("net/mlx5: support shared Rx queue")
Cc: xuemingl@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 88b2dc54b3..2c51af11c7 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2280,6 +2280,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 					RTE_ETH_QUEUE_STATE_STOPPED;
 		}
 	} else { /* Refcnt zero, closing device. */
+		LIST_REMOVE(rxq_ctrl, next);
 		LIST_REMOVE(rxq, owner_entry);
 		if (LIST_EMPTY(&rxq_ctrl->owners)) {
 			if (!rxq_ctrl->is_hairpin)
@@ -2287,7 +2288,6 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 					(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			if (rxq_ctrl->rxq.shared)
 				LIST_REMOVE(rxq_ctrl, share_entry);
-			LIST_REMOVE(rxq_ctrl, next);
 			mlx5_free(rxq_ctrl);
 		}
 		dev->data->rx_queues[idx] = NULL;
-- 
2.34.1


* [PATCH] net/mlx5: fix shared Rx queue list management
@ 2023-11-13  7:26 Bing Zhao
  0 siblings, 0 replies; 3+ messages in thread
From: Bing Zhao @ 2023-11-13  7:26 UTC (permalink / raw)
  To: matan, viacheslavo, rasland, suanmingm, orika; +Cc: dev, xuemingl, stable

In the shared Rx queue case, the shared control structure can only be
released after the last port in the group has dereferenced it.

There is another management list that holds all of the Rx queue
structures used by a port. When the reference count of a control
structure drops to zero during port close, the structure can be removed
from that list directly without freeing the resource.

Fixes: 09c2555303be ("net/mlx5: support shared Rx queue")
Cc: xuemingl@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 88b2dc54b3..2c51af11c7 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2280,6 +2280,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 					RTE_ETH_QUEUE_STATE_STOPPED;
 		}
 	} else { /* Refcnt zero, closing device. */
+		LIST_REMOVE(rxq_ctrl, next);
 		LIST_REMOVE(rxq, owner_entry);
 		if (LIST_EMPTY(&rxq_ctrl->owners)) {
 			if (!rxq_ctrl->is_hairpin)
@@ -2287,7 +2288,6 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 					(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			if (rxq_ctrl->rxq.shared)
 				LIST_REMOVE(rxq_ctrl, share_entry);
-			LIST_REMOVE(rxq_ctrl, next);
 			mlx5_free(rxq_ctrl);
 		}
 		dev->data->rx_queues[idx] = NULL;
-- 
2.34.1



Thread overview: 3+ messages
2023-11-13  7:24 [PATCH] net/mlx5: fix shared Rx queue list management Bing Zhao
2023-11-14  8:13 ` Raslan Darawsheh
2023-11-13  7:26 Bing Zhao
