From: Frank Du <frank.du@intel.com>
To: dev@dpdk.org
Cc: ciara.loftus@intel.com, ferruh.yigit@amd.com, mb@smartsharesystems.com
Subject: [PATCH v4] net/af_xdp: fix umem map size for zero copy
Date: Thu, 23 May 2024 16:07:51 +0800
Message-Id: <20240523080751.2347970-1-frank.du@intel.com>
In-Reply-To: <20240426005128.148730-1-frank.du@intel.com>
References: <20240426005128.148730-1-frank.du@intel.com>

The current calculation assumes that the mbufs are contiguous. However,
this assumption does not hold when the mbuf memory spans across huge
pages: to ensure that each mbuf resides exclusively within a single
page, deliberate spacing gaps are left when mbufs are allocated across
page boundaries. Correct this by reading the size directly from the
mempool memory chunk.
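As an illustration of the bug (not part of the patch; all sizes below
are hypothetical), the old formula populated_size * frame_size
undercounts the mapping once the populator leaves a gap at the end of
each page:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical sizes, for illustration only. */
	const uint64_t page_size  = 4096;
	const uint64_t frame_size = 2176; /* one mbuf element */
	const uint64_t n_mbufs    = 1024;

	/* No mbuf may cross a page boundary, so only
	 * page_size / frame_size objects fit per page. */
	uint64_t objs_per_page = page_size / frame_size; /* 1 here */
	uint64_t pages = (n_mbufs + objs_per_page - 1) / objs_per_page;

	uint64_t old_size  = n_mbufs * frame_size; /* assumes packed mbufs */
	uint64_t real_span = pages * page_size;    /* chunk length with gaps */

	printf("old: %llu bytes, real: %llu bytes (short by %llu)\n",
	       (unsigned long long)old_size,
	       (unsigned long long)real_span,
	       (unsigned long long)(real_span - old_size));
	return 0;
}

With these numbers the old formula maps roughly half of the region the
mbufs actually occupy, so frames in the upper pages fall outside the
umem.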
Fixes: d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
Cc: stable@dpdk.org

Signed-off-by: Frank Du <frank.du@intel.com>
---
v2:
* Add virtually-contiguous detection for multiple memhdrs
v3:
* Use RTE_ALIGN_FLOOR to get the aligned addr
* Add check on the first memhdr of memory chunks
v4:
* Replace the iteration with a simple nb_mem_chunks check
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 33 +++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 6ba455bb9b..d0431ec089 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -1040,16 +1040,32 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 #if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
-static inline uintptr_t get_base_addr(struct rte_mempool *mp, uint64_t *align)
+static inline uintptr_t
+get_memhdr_info(const struct rte_mempool *mp, uint64_t *align, size_t *len)
 {
 	struct rte_mempool_memhdr *memhdr;
 	uintptr_t memhdr_addr, aligned_addr;
 
+	if (mp->nb_mem_chunks != 1) {
+		/*
+		 * A mempool with multiple chunks is not virtually contiguous,
+		 * but xsk umem supports mapping only a single virtual region.
+		 */
+		AF_XDP_LOG(ERR, "The mempool contains multiple (%u) memory chunks\n",
+			   mp->nb_mem_chunks);
+		return 0;
+	}
+
+	/* Get the mempool base address and alignment from the first memhdr */
 	memhdr = STAILQ_FIRST(&mp->mem_list);
+	if (!memhdr) {
+		AF_XDP_LOG(ERR, "The mempool is not populated\n");
+		return 0;
+	}
 	memhdr_addr = (uintptr_t)memhdr->addr;
-	aligned_addr = memhdr_addr & ~(getpagesize() - 1);
+	aligned_addr = RTE_ALIGN_FLOOR(memhdr_addr, getpagesize());
 	*align = memhdr_addr - aligned_addr;
-
+	*len = memhdr->len;
 	return aligned_addr;
 }
 
@@ -1126,6 +1142,7 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 	void *base_addr = NULL;
 	struct rte_mempool *mb_pool = rxq->mb_pool;
 	uint64_t umem_size, align = 0;
+	size_t len = 0;
 
 	if (internals->shared_umem) {
 		if (get_shared_umem(rxq, internals->if_name, &umem) < 0)
@@ -1157,10 +1174,12 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 	}
 
 	umem->mb_pool = mb_pool;
-	base_addr = (void *)get_base_addr(mb_pool, &align);
-	umem_size = (uint64_t)mb_pool->populated_size *
-			(uint64_t)usr_config.frame_size +
-			align;
+	base_addr = (void *)get_memhdr_info(mb_pool, &align, &len);
+	if (!base_addr) {
+		AF_XDP_LOG(ERR, "The memory pool can't be mapped as umem\n");
+		goto err;
+	}
+	umem_size = (uint64_t)len + align;
 
 	ret = xsk_umem__create(&umem->umem, base_addr, umem_size,
 			       &rxq->fq, &rxq->cq, &usr_config);
-- 
2.34.1
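For reference, a self-contained sketch (not part of the patch) of the
arithmetic the new get_memhdr_info()/xdp_umem_configure() path
performs: floor the chunk address to a page boundary, remember the
alignment slack, and size the umem as chunk length plus slack. The
address and length below are hypothetical stand-ins for memhdr->addr
and memhdr->len:

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Same arithmetic as RTE_ALIGN_FLOOR for a power-of-two page size. */
static uintptr_t align_floor(uintptr_t addr, uintptr_t pgsz)
{
	return addr & ~(pgsz - 1);
}

int main(void)
{
	uintptr_t pgsz = (uintptr_t)getpagesize();

	/* Hypothetical stand-ins for memhdr->addr and memhdr->len. */
	uintptr_t memhdr_addr = 0x10000880;
	size_t memhdr_len = 2 * 1024 * 1024;

	uintptr_t base = align_floor(memhdr_addr, pgsz);
	uint64_t align = memhdr_addr - base;
	uint64_t umem_size = (uint64_t)memhdr_len + align;

	printf("base=%#lx align=%llu umem_size=%llu\n",
	       (unsigned long)base, (unsigned long long)align,
	       (unsigned long long)umem_size);
	return 0;
}

Because the umem now covers the whole memhdr chunk rather than a
packed-mbuf estimate, any per-page gaps the populator inserted are
inside the mapped region.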