From: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
To: <dev@dpdk.org>
Cc: Matan Azrad <matan@nvidia.com>,
	Viacheslav Ovsiienko <viacheslavo@nvidia.com>, <stable@dpdk.org>
Subject: [PATCH 2/2] common/mlx5: fix multi-process mempool registration
Date: Mon, 8 Aug 2022 12:42:36 +0300	[thread overview]
Message-ID: <20220808094236.3395516-3-dkozlyuk@nvidia.com> (raw)
In-Reply-To: <20220808094236.3395516-1-dkozlyuk@nvidia.com>

The `mp_cb_registered` flag, shared between all processes,
was used to ensure that for any IB device (MLX5 common device)
the mempool event callback was registered only once,
and that mempools existing before the device start
were traversed only once to register them.
Since mempool callback registrations have become process-private,
callback registration must be done by every process.
The flag can no longer reflect the state of any single process.
Replace it with a registration counter that tracks
when no more callbacks are registered for the device in any process.
It is sufficient to register pre-existing mempools
only in the primary process, because it is the one that starts the device.
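
For reference, a minimal standalone sketch of the counting scheme
(names below are illustrative only, not DPDK or mlx5 API; it assumes
the GCC __atomic builtins used in the diff):

#include <stdint.h>
#include <stdbool.h>

/* Shared per-device state (e.g. in shared memory); illustrative only. */
struct cb_share_state {
	uint32_t cb_reg_n; /* Processes holding a registered callback. */
};

/* Called by every process: each registers its own process-private callback. */
static void
cb_subscribe(struct cb_share_state *st, bool is_primary)
{
	/* ... register this process's callback here ... */
	__atomic_add_fetch(&st->cb_reg_n, 1, __ATOMIC_ACQUIRE);
	if (is_primary) {
		/* Only the primary walks mempools that pre-date device start. */
	}
}

/* Called on teardown: only the last registrant performs the cleanup. */
static void
cb_unsubscribe(struct cb_share_state *st)
{
	if (__atomic_sub_fetch(&st->cb_reg_n, 1, __ATOMIC_RELEASE) > 0)
		return;
	/* ... last registrant gone: unregister callback and mempools ... */
}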

Fixes: 690b2a88c2f7 ("common/mlx5: add mempool registration facilities")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
---
 drivers/common/mlx5/mlx5_common.c    | 15 +++++++++------
 drivers/common/mlx5/mlx5_common_mr.c |  2 +-
 drivers/common/mlx5/mlx5_common_mr.h |  2 +-
 3 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 89fef2b535..4dcc8cc49c 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -583,18 +583,17 @@ mlx5_dev_mempool_subscribe(struct mlx5_common_device *cdev)
 	if (!cdev->config.mr_mempool_reg_en)
 		return 0;
 	rte_rwlock_write_lock(&cdev->mr_scache.mprwlock);
-	if (cdev->mr_scache.mp_cb_registered)
-		goto exit;
 	/* Callback for this device may be already registered. */
 	ret = rte_mempool_event_callback_register(mlx5_dev_mempool_event_cb,
 						  cdev);
 	if (ret != 0 && rte_errno != EEXIST)
 		goto exit;
+	__atomic_add_fetch(&cdev->mr_scache.mempool_cb_reg_n, 1,
+			   __ATOMIC_ACQUIRE);
 	/* Register mempools only once for this device. */
-	if (ret == 0)
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
 		rte_mempool_walk(mlx5_dev_mempool_register_cb, cdev);
 	ret = 0;
-	cdev->mr_scache.mp_cb_registered = 1;
 exit:
 	rte_rwlock_write_unlock(&cdev->mr_scache.mprwlock);
 	return ret;
@@ -603,10 +602,14 @@ mlx5_dev_mempool_subscribe(struct mlx5_common_device *cdev)
 static void
 mlx5_dev_mempool_unsubscribe(struct mlx5_common_device *cdev)
 {
+	uint32_t mempool_cb_reg_n;
 	int ret;
 
-	if (!cdev->mr_scache.mp_cb_registered ||
-	    !cdev->config.mr_mempool_reg_en)
+	if (!cdev->config.mr_mempool_reg_en)
+		return;
+	mempool_cb_reg_n = __atomic_sub_fetch(&cdev->mr_scache.mempool_cb_reg_n,
+					      1, __ATOMIC_RELEASE);
+	if (mempool_cb_reg_n > 0)
 		return;
 	/* Stop watching for mempool events and unregister all mempools. */
 	ret = rte_mempool_event_callback_unregister(mlx5_dev_mempool_event_cb,
diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index 8d8bec99a9..1d54102b54 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -1138,7 +1138,7 @@ mlx5_mr_create_cache(struct mlx5_mr_share_cache *share_cache, int socket)
 			      &share_cache->dereg_mr_cb);
 	rte_rwlock_init(&share_cache->rwlock);
 	rte_rwlock_init(&share_cache->mprwlock);
-	share_cache->mp_cb_registered = 0;
+	share_cache->mempool_cb_reg_n = 0;
 	/* Initialize B-tree and allocate memory for global MR cache table. */
 	return mlx5_mr_btree_init(&share_cache->cache,
 				  MLX5_MR_BTREE_CACHE_N * 2, socket);
diff --git a/drivers/common/mlx5/mlx5_common_mr.h b/drivers/common/mlx5/mlx5_common_mr.h
index 213f5427cb..a5f2d4fd35 100644
--- a/drivers/common/mlx5/mlx5_common_mr.h
+++ b/drivers/common/mlx5/mlx5_common_mr.h
@@ -81,7 +81,7 @@ struct mlx5_mr_share_cache {
 	uint32_t dev_gen; /* Generation number to flush local caches. */
 	rte_rwlock_t rwlock; /* MR cache Lock. */
 	rte_rwlock_t mprwlock; /* Mempool Registration Lock. */
-	uint8_t mp_cb_registered; /* Mempool are Registered. */
+	uint32_t mempool_cb_reg_n; /* Mempool event callback registrants. */
 	struct mlx5_mr_btree cache; /* Global MR cache table. */
 	struct mlx5_mr_list mr_list; /* Registered MR list. */
 	struct mlx5_mr_list mr_free_list; /* Freed MR list. */
-- 
2.25.1



Thread overview: 8+ messages
2022-08-08  9:42 [PATCH 0/2] " Dmitry Kozlyuk
2022-08-08  9:42 ` [PATCH 1/2] mempool: make event callbacks process-private Dmitry Kozlyuk
2022-08-28 18:33   ` Slava Ovsiienko
2022-10-10  8:02     ` Andrew Rybchenko
2022-09-22  7:31   ` Dmitry Kozlyuk
2022-08-08  9:42 ` Dmitry Kozlyuk [this message]
2022-08-28 18:34   ` [PATCH 2/2] common/mlx5: fix multi-process mempool registration Slava Ovsiienko
2022-10-10 13:20 ` [PATCH 0/2] " Thomas Monjalon
