* [PATCH] net/mlx5: fix storage handling of shared Rx queues
From: Gregory Etelson @ 2025-07-31 10:41 UTC
To: dev
Cc: getelson,
rasland, Dariusz Sosnowski, Viacheslav Ovsiienko, Bing Zhao,
Ori Kam, Suanming Mou, Matan Azrad
The MLX5 PMD maintains 2 lists for Rx queues:
- mlx5_priv::rxqsctrl - for both non-shared and shared Rx queues
- mlx5_dev_ctx_shared::shared_rxqs - for shared Rx queues only

The PMD used `rxqsctrl` as the primary list for Rx queue
maintenance.
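
Both lists link the same mlx5_rxq_ctrl objects, but the two list
heads live in objects with different lifetimes. A minimal sketch of
the relevant fields, inferred from the diff below (all other members
omitted; the lists use the <sys/queue.h> macros):

    /* Per-port private data, wiped out when the port is closed. */
    struct mlx5_priv {
            LIST_HEAD(, mlx5_rxq_ctrl) rxqsctrl;
    };

    /* Device context shared by all ports, outlives any one port. */
    struct mlx5_dev_ctx_shared {
            LIST_HEAD(, mlx5_rxq_ctrl) shared_rxqs;
    };

    struct mlx5_rxq_ctrl {
            LIST_ENTRY(mlx5_rxq_ctrl) next;        /* in rxqsctrl */
            LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* in shared_rxqs */
    };
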
The PMD wipes out the port's mlx5_priv object after an application
closes the port.

If the PMD shared Rx queues between the transfer proxy port and
representor ports and the transfer proxy port was closed before the
representors, a representor port cannot iterate its shared Rx queues
because the Rx queue list head was wiped out.
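
The release path makes the failure concrete. Before the fix,
mlx5_rxq_release() unlinked every queue from `rxqsctrl`
unconditionally, so releasing a shared Rx queue after its creating
port was closed touched `next` links whose list head lived in the
freed mlx5_priv (condensed from the last hunk below):

    if (rxq_ctrl->rxq.shared)
            LIST_REMOVE(rxq_ctrl, share_entry); /* head in sh: valid */
    LIST_REMOVE(rxq_ctrl, next); /* head was in the closed port's
                                  * mlx5_priv: freed memory */
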
The patch separates Rx queue storage according to the queue type:
- shared Rx queues are stored in the `shared_rxqs` list only
- non-shared Rx queues are stored in the `rxqsctrl` list only.
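
With the split, each Rx queue sits on exactly one list, and code
that must visit every Rx queue walks both heads, as the
flow_rxq_mark_flag_set() hunk below does. Condensed:

    LIST_FOREACH(rxq_ctrl, &priv->rxqsctrl, next)
            rxq_ctrl->rxq.mark = 1;        /* non-shared queues */
    LIST_FOREACH(rxq_ctrl, &priv->sh->shared_rxqs, share_entry)
            rxq_ctrl->rxq.mark = 1;        /* shared queues */
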
Fixes: 6886b5f39d66 ("net/mlx5: fix hairpin queue release")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c | 13 ++++++++++---
 drivers/net/mlx5/mlx5_flow.c     |  6 ++++++
 drivers/net/mlx5/mlx5_rxq.c      |  6 ++++--
 3 files changed, 20 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 696a3e12c7..0266f71bb5 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -747,13 +747,20 @@ void
 mlx5_os_free_shared_dr(struct mlx5_priv *priv)
 {
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
-#ifdef HAVE_MLX5DV_DR
-	int i;
-#endif
+	struct mlx5_rxq_ctrl *rxq_ctrl;
+	int i = 0;
 
 	MLX5_ASSERT(sh && sh->refcnt);
 	if (sh->refcnt > 1)
 		return;
+	LIST_FOREACH(rxq_ctrl, &sh->shared_rxqs, share_entry) {
+		DRV_LOG(DEBUG, "port %u Rx Queue %u still referenced",
+			priv->dev_data->port_id, rxq_ctrl->rxq.idx);
+		++i;
+	}
+	if (i > 0)
+		DRV_LOG(WARNING, "port %u %d Rx queues still remain",
+			priv->dev_data->port_id, i);
 	MLX5_ASSERT(LIST_EMPTY(&sh->shared_rxqs));
 #ifdef HAVE_MLX5DV_DR
 	if (sh->rx_domain) {
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 8db372123c..fa8b95df16 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1652,12 +1652,18 @@ flow_rxq_mark_flag_set(struct rte_eth_dev *dev)
 			LIST_FOREACH(rxq_ctrl, &opriv->rxqsctrl, next) {
 				rxq_ctrl->rxq.mark = 1;
 			}
+			LIST_FOREACH(rxq_ctrl, &opriv->sh->shared_rxqs, share_entry) {
+				rxq_ctrl->rxq.mark = 1;
+			}
 			opriv->mark_enabled = 1;
 		}
 	} else {
 		LIST_FOREACH(rxq_ctrl, &priv->rxqsctrl, next) {
 			rxq_ctrl->rxq.mark = 1;
 		}
+		LIST_FOREACH(rxq_ctrl, &priv->sh->shared_rxqs, share_entry) {
+			rxq_ctrl->rxq.mark = 1;
+		}
 		priv->mark_enabled = 1;
 	}
 	priv->sh->shared_mark_enabled = 1;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 77c5848c37..1425886a22 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2033,8 +2033,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		tmpl->share_group = conf->share_group;
 		tmpl->share_qid = conf->share_qid;
 		LIST_INSERT_HEAD(&priv->sh->shared_rxqs, tmpl, share_entry);
+	} else {
+		LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	}
-	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	rte_atomic_store_explicit(&tmpl->ctrl_ref, 1, rte_memory_order_relaxed);
 	return tmpl;
 error:
@@ -2365,7 +2366,8 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 					(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 		if (rxq_ctrl->rxq.shared)
 			LIST_REMOVE(rxq_ctrl, share_entry);
-		LIST_REMOVE(rxq_ctrl, next);
+		else
+			LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl->rxq.rq_win_data);
 		mlx5_free(rxq_ctrl);
 	}
--
2.48.1
* Re: [PATCH] net/mlx5: fix storage handling of shared Rx queues
From: Raslan Darawsheh @ 2025-08-18 6:32 UTC
To: Gregory Etelson, dev
Cc: mkashani, Dariusz Sosnowski, Viacheslav Ovsiienko, Bing Zhao,
Ori Kam, Suanming Mou, Matan Azrad
Hi,
On 31/07/2025 1:41 PM, Gregory Etelson wrote:
> The MLX5 PMD maintains 2 lists for Rx queues:
> - mlx5_priv::rxqsctrl - for both non-shared and shared Rx queues
> - mlx5_dev_ctx_shared::shared_rxqs - for shared Rx queues only
>
> The PMD used `rxqsctrl` as the primary list for Rx queue
> maintenance.
>
> The PMD wipes out the port's mlx5_priv object after an application
> closes the port.
>
> If the PMD shared Rx queues between the transfer proxy port and
> representor ports and the transfer proxy port was closed before the
> representors, a representor port cannot iterate its shared Rx queues
> because the Rx queue list head was wiped out.
>
> The patch separates Rx queue storage according to the queue type:
> - shared Rx queues are stored in the `shared_rxqs` list only
> - non-shared Rx queues are stored in the `rxqsctrl` list only.
>
> Fixes: 6886b5f39d66 ("net/mlx5: fix hairpin queue release")
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Patch applied to next-net-mlx,
Kindest regards
Raslan Darawsheh