DPDK patches and discussions
From: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
To: <dev@dpdk.org>
Cc: Thomas Monjalon <thomas@monjalon.net>,
	Raslan Darawsheh <rasland@nvidia.com>,
	Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
	"Matan Azrad" <matan@nvidia.com>
Subject: [PATCH] common/mlx5: fix MR lookup on slow path
Date: Thu, 25 Nov 2021 22:20:44 +0200
Message-ID: <20211125202044.3483813-1-dkozlyuk@nvidia.com>

The memory region (MR) was looked up incorrectly
for the data address of an externally-attached mbuf:
the lookup searched the mempool the mbuf came from,
but the data address of an externally-attached mbuf
does not belong to that mempool.
Only attempt the search:
1) for mbufs that are not externally attached;
2) for mbufs coming from the MPRQ mempool;
3) for externally-attached mbufs from mempools
   with pinned external buffers.

Fixes: 08ac03580ef2 ("common/mlx5: fix mempool registration")

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
---
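For reviewers: the condition this patch introduces, restated as
standalone C. This is a minimal sketch of the predicate, not the
driver code itself; the helper name is hypothetical.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical restatement of the fixed condition: search the
     * mempool only when the mbuf data address is known to belong
     * to it (cases 1-3 in the commit message). */
    static bool
    attempt_mempool_search(bool external, bool mprq, bool pinned)
    {
        return !external || mprq || pinned;
    }

    int
    main(void)
    {
        /* Pre-patch, the first case wrongly searched the mbuf's
         * mempool although the data lives outside of it. */
        printf("plain external: %d\n",
               attempt_mempool_search(true, false, false));
        printf("regular mbuf:   %d\n",
               attempt_mempool_search(false, false, false));
        printf("MPRQ buffer:    %d\n",
               attempt_mempool_search(true, true, false));
        printf("pinned extbuf:  %d\n",
               attempt_mempool_search(true, false, true));
        return 0;
    }
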
 drivers/common/mlx5/mlx5_common_mr.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index 01f35ebcdf..c694aaf28c 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -2002,21 +2002,29 @@ mlx5_mr_mb2mr_bh(struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mb)
 			     dev_gen);
 	struct mlx5_common_device *cdev =
 		container_of(share_cache, struct mlx5_common_device, mr_scache);
+	bool external, mprq, pinned = false;
 
 	/* Recover MPRQ mempool. */
-	if (RTE_MBUF_HAS_EXTBUF(mb) &&
-	    mb->shinfo->free_cb == mlx5_mprq_buf_free_cb) {
+	external = RTE_MBUF_HAS_EXTBUF(mb);
+	if (external && mb->shinfo->free_cb == mlx5_mprq_buf_free_cb) {
+		mprq = true;
 		buf = mb->shinfo->fcb_opaque;
 		mp = buf->mp;
 	} else {
+		mprq = false;
 		mp = mlx5_mb2mp(mb);
+		pinned = rte_pktmbuf_priv_flags(mp) &
+			 RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF;
+	}
+	if (!external || mprq || pinned) {
+		lkey = mlx5_mr_mempool2mr_bh(mr_ctrl, mp, addr);
+		if (lkey != UINT32_MAX)
+			return lkey;
+		/* MPRQ is always registered. */
+		MLX5_ASSERT(!mprq);
 	}
-	lkey = mlx5_mr_mempool2mr_bh(mr_ctrl, mp, addr);
-	if (lkey != UINT32_MAX)
-		return lkey;
 	/* Register pinned external memory if the mempool is not used for Rx. */
-	if (cdev->config.mr_mempool_reg_en &&
-	    (rte_pktmbuf_priv_flags(mp) & RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF)) {
+	if (cdev->config.mr_mempool_reg_en && pinned) {
 		if (mlx5_mr_mempool_register(cdev, mp, true) < 0)
 			return UINT32_MAX;
 		lkey = mlx5_mr_mempool2mr_bh(mr_ctrl, mp, addr);
-- 
2.25.1
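
For context (not part of the patch): the mempools with pinned external
buffers handled in case 3 above, and registered on the fly in the last
hunk, are created with rte_pktmbuf_pool_create_extbuf(), which sets
RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF. A minimal sketch, with illustrative
sizes and pool name, and error handling elided:

    #include <rte_mbuf.h>
    #include <rte_malloc.h>

    /* Sketch: one external memory area backing 512 pinned buffers.
     * The 2048-byte element size and 4 KiB alignment are
     * illustrative, not requirements. */
    static struct rte_mempool *
    create_pinned_pool(void)
    {
        static struct rte_pktmbuf_extmem ext;

        ext.elt_size = 2048;
        ext.buf_len = 512 * ext.elt_size;
        ext.buf_ptr = rte_malloc(NULL, ext.buf_len, 4096);
        ext.buf_iova = rte_malloc_virt2iova(ext.buf_ptr);
        return rte_pktmbuf_pool_create_extbuf("pinned_pool", 512, 0,
                                              0, 2048, SOCKET_ID_ANY,
                                              &ext, 1);
    }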



Thread overview: 3+ messages
2021-11-25 20:20 Dmitry Kozlyuk [this message]
2021-11-26  8:52 ` Slava Ovsiienko
2021-11-26 12:28   ` Thomas Monjalon
