From: Frank Du <frank.du@intel.com>
To: dev@dpdk.org
Cc: ciara.loftus@intel.com, ferruh.yigit@amd.com, mb@smartsharesystems.com
Subject: [PATCH v3] net/af_xdp: fix umem map size for zero copy
Date: Thu, 23 May 2024 14:53:02 +0800
Message-Id: <20240523065302.2345392-1-frank.du@intel.com>
In-Reply-To: <20240426005128.148730-1-frank.du@intel.com>
References: <20240426005128.148730-1-frank.du@intel.com>

The current umem map size calculation assumes that the mbufs are
virtually contiguous. However, this assumption does not hold when the
mempool memory crosses a huge page boundary. Correct this by reading
the size directly from the mempool's memory chunks.

Fixes: d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
Cc: stable@dpdk.org

Signed-off-by: Frank Du <frank.du@intel.com>
---
v2:
* Add virtually contiguous detection for multiple memhdrs.

v3:
* Use RTE_ALIGN_FLOOR to get the aligned addr.
* Add a check on the first memhdr of the memory chunks.
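
Reviewer note, not part of the patch: the short sketch below walks a
mempool's mem_list the same way the new get_memhdr_info() helper does,
so the chunk layout that motivates this fix can be inspected on a live
system. The dump_memhdrs() name is made up for illustration; any pool
created with e.g. rte_pktmbuf_pool_create() can be passed in.

#include <stdint.h>
#include <stdio.h>
#include <sys/queue.h>

#include <rte_mempool.h>

/* Illustration only: print each memory chunk backing a mempool and flag
 * gaps between chunks, mirroring the contiguity walk added by this patch.
 */
static void
dump_memhdrs(struct rte_mempool *mp)
{
	struct rte_mempool_memhdr *memhdr, *prev = NULL;
	size_t total = 0;

	STAILQ_FOREACH(memhdr, &mp->mem_list, next) {
		printf("chunk: addr %p len %zu\n", memhdr->addr, memhdr->len);
		if (prev != NULL && (uintptr_t)memhdr->addr !=
				(uintptr_t)prev->addr + prev->len)
			printf("  (not contiguous with previous chunk)\n");
		total += memhdr->len;
		prev = memhdr;
	}
	printf("%zu bytes backing %u objects\n", total, mp->populated_size);
}

A pool large enough to need more than one huge page should report
several chunks here; that multi-chunk case is exactly where the old
populated_size * frame_size arithmetic diverged from the real mapping
size.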
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 40 ++++++++++++++++++++++++-----
 1 file changed, 33 insertions(+), 7 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 6ba455bb9b..986665d1d4 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -1040,16 +1040,39 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 #if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
-static inline uintptr_t get_base_addr(struct rte_mempool *mp, uint64_t *align)
+static inline uintptr_t get_memhdr_info(struct rte_mempool *mp, uint64_t *align, size_t *len)
 {
-	struct rte_mempool_memhdr *memhdr;
+	struct rte_mempool_memhdr *memhdr, *next;
 	uintptr_t memhdr_addr, aligned_addr;
+	size_t memhdr_len = 0;
 
+	/* get the mempool base addr and align */
 	memhdr = STAILQ_FIRST(&mp->mem_list);
+	if (!memhdr) {
+		AF_XDP_LOG(ERR, "The mempool is not populated\n");
+		return 0;
+	}
 	memhdr_addr = (uintptr_t)memhdr->addr;
-	aligned_addr = memhdr_addr & ~(getpagesize() - 1);
+	aligned_addr = RTE_ALIGN_FLOOR(memhdr_addr, getpagesize());
 	*align = memhdr_addr - aligned_addr;
+	memhdr_len += memhdr->len;
+
+	/* check if virtual contiguous memory for multiple memhdrs */
+	next = STAILQ_NEXT(memhdr, next);
+	while (next) {
+		if ((uintptr_t)next->addr != (uintptr_t)memhdr->addr + memhdr->len) {
+			AF_XDP_LOG(ERR, "Memory chunks not virtual contiguous, "
+					"next: %p, cur: %p(len: %" PRId64 " )\n",
+					next->addr, memhdr->addr, memhdr->len);
+			return 0;
+		}
+		/* virtual contiguous */
+		memhdr = next;
+		memhdr_len += memhdr->len;
+		next = STAILQ_NEXT(memhdr, next);
+	}
+	*len = memhdr_len;
 
 	return aligned_addr;
 }
@@ -1126,6 +1149,7 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 	void *base_addr = NULL;
 	struct rte_mempool *mb_pool = rxq->mb_pool;
 	uint64_t umem_size, align = 0;
+	size_t len = 0;
 
 	if (internals->shared_umem) {
 		if (get_shared_umem(rxq, internals->if_name, &umem) < 0)
@@ -1157,10 +1181,12 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 	}
 
 	umem->mb_pool = mb_pool;
-	base_addr = (void *)get_base_addr(mb_pool, &align);
-	umem_size = (uint64_t)mb_pool->populated_size *
-			(uint64_t)usr_config.frame_size +
-			align;
+	base_addr = (void *)get_memhdr_info(mb_pool, &align, &len);
+	if (!base_addr) {
+		AF_XDP_LOG(ERR, "The memory pool can't be mapped into umem\n");
+		goto err;
+	}
+	umem_size = (uint64_t)len + align;
 
 	ret = xsk_umem__create(&umem->umem, base_addr, umem_size,
 			       &rxq->fq, &rxq->cq, &usr_config);
-- 
2.34.1