From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To: dev@dpdk.org
Cc: rasland@nvidia.com, matan@nvidia.com, stable@dpdk.org
Subject: [dpdk-dev] [PATCH] net/mlx5: fix external buffer pool registration for Rx queue
Date: Fri, 12 Feb 2021 13:06:30 +0200	[thread overview]
Message-ID: <20210212110630.2605-1-viacheslavo@nvidia.com> (raw)

On Rx queue creation the mlx5 PMD registers the data buffers of the
specified pools for DMA operations: it scans the mem_list of each
pool and creates MRs (the DMA-related NIC memory region objects) for
the chunks found. If a pool is created with
rte_pktmbuf_pool_create_extbuf() and refers to attached external
buffers, the chunks contain only the mbuf structures, without any
built-in data buffers; the external area is the application's
responsibility, and the application must explicitly register that
memory for DMA with an rte_dev_dma_map() call. Hence, the mlx5 NIC
never performs DMA to these chunks and there is no need to create
MRs for them.
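
(For context, a minimal application-side sketch; it is not part of
this patch. It assumes the external area ext_base/ext_len was
obtained outside the DPDK heaps and already registered with
rte_extmem_register(); the helper name create_extbuf_pool and the
element sizes are hypothetical.)

    #include <rte_dev.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_memory.h>
    #include <rte_mempool.h>

    /* Hypothetical helper: create a pool of mbufs with pinned
     * external buffers and DMA-map the external area for the
     * given port. ext_len must cover 4096 * elt_size bytes.
     */
    static struct rte_mempool *
    create_extbuf_pool(uint16_t port_id, void *ext_base, size_t ext_len)
    {
            struct rte_pktmbuf_extmem ext_mem = {
                    .buf_ptr = ext_base,
                    .buf_iova = rte_mem_virt2iova(ext_base),
                    .buf_len = ext_len,
                    .elt_size = 2048, /* per-mbuf data buffer size */
            };
            struct rte_eth_dev_info dev_info;
            struct rte_mempool *mp;

            /* The pool chunks hold mbuf structures only; the data
             * buffers are attached from ext_mem (the case described
             * above).
             */
            mp = rte_pktmbuf_pool_create_extbuf("ext_pool", 4096, 256,
                            0, ext_mem.elt_size, rte_socket_id(),
                            &ext_mem, 1);
            if (mp == NULL)
                    return NULL;
            /* The external area is the application's responsibility:
             * map it for DMA on the port's device explicitly.
             */
            if (rte_eth_dev_info_get(port_id, &dev_info) != 0 ||
                rte_dev_dma_map(dev_info.device, ext_base,
                                ext_mem.buf_iova, ext_len) != 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }
            return mp;
    }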

These unneeded MRs were nonetheless being created for pools with
external buffers, loading the MR cache and slightly degrading
performance. This patch checks the mbuf pool type and skips MR
creation for pools with external buffers.

Fixes: bdb8e5b1ea7b ("net/mlx5: allow allocated mbuf with external buffer")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_mr.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index 8b20ee3f83..da4e91fc24 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -535,7 +535,18 @@ mlx5_mr_update_mp(struct rte_eth_dev *dev, struct mlx5_mr_ctrl *mr_ctrl,
 		.mr_ctrl = mr_ctrl,
 		.ret = 0,
 	};
+	uint32_t flags = rte_pktmbuf_priv_flags(mp);
 
+	if (flags & RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF) {
+		/*
+	 * Pinned external buffers must be registered for DMA by the
+	 * application. The mem_list of such a pool contains chunks
+	 * with mbuf structures only, without built-in data buffers;
+	 * DMA never targets these chunks, so there is no need to
+	 * create MRs for them.
+		 */
+		return 0;
+	}
 	DRV_LOG(DEBUG, "Port %u Rx queue registering mp %s "
 		       "having %u chunks.", dev->data->port_id,
 		       mp->name, mp->nb_mem_chunks);
-- 
2.18.1



Thread overview: 3+ messages
2021-02-12 11:06 Viacheslav Ovsiienko [this message]
2021-02-14 10:42 ` Matan Azrad
2021-02-21  8:14 ` Raslan Darawsheh
