DPDK patches and discussions
From: Gregory Etelson <getelson@nvidia.com>
To: <dev@dpdk.org>
Cc: getelson@nvidia.com,   <mkashani@nvidia.com>,
	rasland@nvidia.com, stable@dpdk.org,
	"Viacheslav Ovsiienko" <viacheslavo@nvidia.com>,
	"Dariusz Sosnowski" <dsosnowski@nvidia.com>,
	"Bing Zhao" <bingz@nvidia.com>, "Ori Kam" <orika@nvidia.com>,
	"Suanming Mou" <suanmingm@nvidia.com>,
	"Matan Azrad" <matan@nvidia.com>,
	"Rongwei Liu" <rongweil@nvidia.com>,
	"Alex Vesker" <valex@nvidia.com>
Subject: [PATCH] net/mlx5: fix SRH flex parser initialization synchronization
Date: Thu, 25 Dec 2025 18:20:49 +0200
Message-ID: <20251225162049.648482-1-getelson@nvidia.com>

When multiple threads attempt to create the SRH flex parser
simultaneously, only the first thread (T[0]) proceeds to the
initialization code.
Before doing so, T[0] increments the SRH flex parser reference counter.
The other threads (T[i]) entering the SRH creation function also
increment the reference counter and then return success to their
respective caller functions (CF[i]).

This can lead to three issues (see the simplified sketch below):
1. CF[i] may receive a successful return code from the SRH flex parser
creation function before T[0] has completed the parser construction.
2. If T[0] fails to create the SRH flex parser, CF[i] is not aware of
the failure and assumes the parser is valid.
3. If T[0] fails, it does not revert the SRH flex parser reference
counter, leaving it incremented for a parser that was never created.
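
To make the race concrete, here is a minimal sketch of the pre-patch
allocation pattern. It is illustrative only: the struct parser type and
the parser_build() helper are hypothetical placeholders, not the actual
mlx5 code.

  #include <rte_stdatomic.h>

  struct parser {
          RTE_ATOMIC(uint32_t) refcnt; /* hypothetical stand-in for srh_flex_parser */
  };

  int parser_build(struct parser *p); /* hypothetical: creates the DevX node */

  /* Pre-patch pattern: the refcount doubles as the "already created" flag. */
  static int
  srh_parser_alloc(struct parser *p)
  {
          /* Every caller bumps the refcount up front, ... */
          if (rte_atomic_fetch_add_explicit(&p->refcnt, 1,
                                            rte_memory_order_relaxed) + 1 > 1)
                  return 0; /* ... so T[i] reports success while T[0] may still
                             * be building the node, or may already have failed.
                             */
          return parser_build(p); /* a failure here leaves refcnt incremented */
  }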

The patch addresses these issues by serializing the SRH flex parser
node creation attempt with a spinlock.

The first thread to enter the locked section proceeds with the node
creation.
On success, T[0] increments the node reference counter and releases
the lock.
On failure, T[0] releases the lock and returns an error.

Any other thread (T[i]) that later acquires the lock checks the flex
parser node reference count: if it is non-zero, the node has already
been created, so the thread increments the reference counter and
returns success.
Otherwise, it behaves like T[0].
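
For comparison, a minimal sketch of the post-patch pattern, reusing the
hypothetical struct parser and parser_build() from the sketch above;
the actual driver change is in the diff below.

  #include <rte_spinlock.h>
  #include <rte_stdatomic.h>

  /* Post-patch pattern: creation is serialized by a static spinlock, and the
   * refcount is bumped only once a valid node is known to exist. */
  static int
  srh_parser_alloc(struct parser *p)
  {
          static rte_spinlock_t sl = RTE_SPINLOCK_INITIALIZER;
          int ret = 0;

          rte_spinlock_lock(&sl);
          if (rte_atomic_load_explicit(&p->refcnt,
                                       rte_memory_order_relaxed) == 0)
                  ret = parser_build(p); /* only the first lock holder builds it */
          if (ret == 0)
                  rte_atomic_fetch_add_explicit(&p->refcnt, 1,
                                                rte_memory_order_relaxed);
          rte_spinlock_unlock(&sl);
          return ret; /* no caller reports success for a missing or failed node */
  }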

Fixes: 00e579166cc0 ("net/mlx5: support IPv6 routing extension matching")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index decf540c51..a0bbd28834 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1050,6 +1050,7 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
 int
 mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
 {
+	static rte_spinlock_t srh_init_sl = RTE_SPINLOCK_INITIALIZER;
 	struct mlx5_devx_graph_node_attr node = {
 		.modify_field_select = 0,
 	};
@@ -1067,13 +1068,16 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
 		DRV_LOG(ERR, "Dynamic flex parser is not supported on HWS");
 		return -ENOTSUP;
 	}
-	if (rte_atomic_fetch_add_explicit(&priv->sh->srh_flex_parser.refcnt, 1,
-			rte_memory_order_relaxed) + 1 > 1)
-		return 0;
+	rte_spinlock_lock(&srh_init_sl);
+	if (rte_atomic_load_explicit(&priv->sh->srh_flex_parser.refcnt,
+				     rte_memory_order_relaxed) > 0)
+		goto end;
 	priv->sh->srh_flex_parser.flex.devx_fp = mlx5_malloc(MLX5_MEM_ZERO,
 			sizeof(struct mlx5_flex_parser_devx), 0, SOCKET_ID_ANY);
-	if (!priv->sh->srh_flex_parser.flex.devx_fp)
-		return -ENOMEM;
+	if (!priv->sh->srh_flex_parser.flex.devx_fp) {
+		rte_errno = ENOMEM;
+		goto error;
+	}
 	node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
 	/* Srv6 first two DW are not counted in. */
 	node.header_length_base_value = 0x8;
@@ -1143,12 +1147,17 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
 						(i + 1) * sizeof(uint32_t) * CHAR_BIT;
 	}
 	priv->sh->srh_flex_parser.flex.map[0].shift = 0;
+end:
+	rte_atomic_fetch_add_explicit(&priv->sh->srh_flex_parser.refcnt, 1,
+				      rte_memory_order_relaxed);
+	rte_spinlock_unlock(&srh_init_sl);
 	return 0;
 error:
 	if (fp)
 		mlx5_devx_cmd_destroy(fp);
 	if (priv->sh->srh_flex_parser.flex.devx_fp)
 		mlx5_free(priv->sh->srh_flex_parser.flex.devx_fp);
+	rte_spinlock_unlock(&srh_init_sl);
 	return (rte_errno == 0) ? -ENODEV : -rte_errno;
 }
 
@@ -1165,7 +1174,7 @@ mlx5_free_srh_flex_parser(struct rte_eth_dev *dev)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_internal_flex_parser_profile *fp = &priv->sh->srh_flex_parser;
 
-	if (rte_atomic_fetch_sub_explicit(&fp->refcnt, 1, rte_memory_order_relaxed) - 1)
+	if (rte_atomic_fetch_sub_explicit(&fp->refcnt, 1, rte_memory_order_relaxed) > 1)
 		return;
 	mlx5_devx_cmd_destroy(fp->flex.devx_fp->devx_obj);
 	mlx5_free(fp->flex.devx_fp);
-- 
2.51.0

