From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To: dev@dpdk.org
Cc: rasland@nvidia.com, matan@nvidia.com, stable@dpdk.org
Date: Fri, 12 Feb 2021 13:06:30 +0200
Message-Id: <20210212110630.2605-1-viacheslavo@nvidia.com>
X-Mailer: git-send-email 2.18.1
Subject: [dpdk-dev] [PATCH] net/mlx5: fix external buffer pool registration for Rx queue

On Rx queue creation the mlx5 PMD registers the data buffers of the
specified pools for DMA operations: it scans the mem_list of each pool
and creates MRs (the DMA-related NIC objects) for the chunks found.

If a pool is created with rte_pktmbuf_pool_create_extbuf() and refers
to external attached buffers (these are the application's
responsibility, and the application must explicitly register the data
buffer memory for DMA with the rte_dev_dma_map() call), the chunks
contain only the mbuf structures, without any built-in data buffers.
Hence, the mlx5 NIC never performs DMA to this area and there is no
need to create MRs for it.

These extra, unneeded MRs were being created for pools with external
buffers, loading the MR cache and slightly affecting performance.

This patch checks the mbuf pool type and skips MR creation for pools
with external buffers.

Fixes: bdb8e5b1ea7b ("net/mlx5: allow allocated mbuf with external buffer")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_mr.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index 8b20ee3f83..da4e91fc24 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -535,7 +535,18 @@ mlx5_mr_update_mp(struct rte_eth_dev *dev, struct mlx5_mr_ctrl *mr_ctrl,
 		.mr_ctrl = mr_ctrl,
 		.ret = 0,
 	};
+	uint32_t flags = rte_pktmbuf_priv_flags(mp);
 
+	if (flags & RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF) {
+		/*
+		 * The pinned external buffer should be registered for DMA
+		 * operations by the application. The mem_list of the pool
+		 * contains only chunks of mbuf structures w/o built-in data
+		 * buffers, so DMA never actually happens there and there is
+		 * no need to create MRs for these chunks.
+		 */
+		return 0;
+	}
 	DRV_LOG(DEBUG, "Port %u Rx queue registering mp %s "
 		       "having %u chunks.", dev->data->port_id,
 		       mp->name, mp->nb_mem_chunks);
-- 
2.18.1
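
For illustration, a minimal sketch (not part of the patch) of the
application-side flow the commit message describes: the application
owns the external data area, registers it with EAL, DMA-maps it itself
via rte_dev_dma_map(), and only then creates the pinned external buffer
pool. The helper name, area and element sizes, and pool parameters are
illustrative assumptions; IOVA-as-VA mode is assumed (iova == va) and
error-path cleanup is omitted for brevity.

#include <sys/mman.h>

#include <rte_dev.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_memory.h>

#define EXT_AREA_LEN (16u << 20) /* 16 MB external data area, illustrative */
#define ELT_SIZE     2048        /* per-mbuf data buffer size, illustrative */
#define PAGE_SZ      4096

/* Hypothetical helper, not a DPDK API. */
static struct rte_mempool *
ext_pool_create(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_pktmbuf_extmem ext_mem;
	void *va;

	/* Page-aligned external data area owned by the application. */
	va = mmap(NULL, EXT_AREA_LEN, PROT_READ | PROT_WRITE,
		  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (va == MAP_FAILED)
		return NULL;
	/* Make the area known to EAL before mapping it for DMA. */
	if (rte_extmem_register(va, EXT_AREA_LEN, NULL, 0, PAGE_SZ) != 0)
		return NULL;
	/* The explicit DMA registration the commit message refers to:
	 * the application, not the PMD, maps the data area for DMA. */
	if (rte_eth_dev_info_get(port_id, &dev_info) != 0 ||
	    rte_dev_dma_map(dev_info.device, va, (rte_iova_t)(uintptr_t)va,
			    EXT_AREA_LEN) != 0)
		return NULL;
	ext_mem.buf_ptr = va;
	ext_mem.buf_iova = (rte_iova_t)(uintptr_t)va;
	ext_mem.buf_len = EXT_AREA_LEN;
	ext_mem.elt_size = ELT_SIZE;
	/* The pool's chunks hold only mbuf headers; the data buffers are
	 * pinned in the external area, so the pool carries the
	 * RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF private flag. */
	return rte_pktmbuf_pool_create_extbuf("ext_pool", 4096, 256, 0,
					      ELT_SIZE, SOCKET_ID_ANY,
					      &ext_mem, 1);
}

A pool created this way is exactly what the patched mlx5_mr_update_mp()
detects via rte_pktmbuf_priv_flags(mp) and skips, since the PMD has no
MR work to do for its chunks.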