From mboxrd@z Thu Jan 1 00:00:00 1970
From: Frank Du <frank.du@intel.com>
To: dev@dpdk.org
Cc: ciara.loftus@intel.com, ferruh.yigit@amd.com, mb@smartsharesystems.com
Subject: [PATCH v5] net/af_xdp: parse umem map info from mempool range api
Date: Thu, 20 Jun 2024 11:25:23 +0800
Message-Id: <20240620032523.440117-1-frank.du@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240426005128.148730-1-frank.du@intel.com>
References: <20240426005128.148730-1-frank.du@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current calculation assumes that the mbufs are contiguous. However,
this assumption is incorrect when the mbuf memory spans across huge
pages. Correct this by reading the memory range directly with the
mempool get range API.
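
In short, the new mapping computation reduces to the sketch below
(condensed from the diff that follows; rte_mempool_get_mem_range(),
RTE_ALIGN_FLOOR() and RTE_PTR_DIFF() are the DPDK APIs the patch uses):

	struct rte_mempool_mem_range_info range;

	if (rte_mempool_get_mem_range(mb_pool, &range) < 0 || !range.is_contiguous)
		goto err; /* the pool must be one virtually contiguous area */

	/* xsk_umem__create() needs a page-aligned base: align the range
	 * start down and grow the mapping size by the same offset. */
	aligned_addr = (void *)RTE_ALIGN_FLOOR((uintptr_t)range.start, getpagesize());
	umem_size = range.length + RTE_PTR_DIFF(range.start, aligned_addr);
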
Fixes: d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
Cc: stable@dpdk.org

Signed-off-by: Frank Du <frank.du@intel.com>
---
v2:
* Add virtually contiguous detection for multiple memhdrs
v3:
* Use RTE_ALIGN_FLOOR to get the aligned addr
* Add a check on the first memhdr of the memory chunks
v4:
* Replace the iteration with a simple nb_mem_chunks check
v5:
* Use rte_mempool_get_mem_range to query the mempool range
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 42 ++++++++++++++---------------
 1 file changed, 20 insertions(+), 22 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 4b282adb03..0bc0d9a55a 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -1067,19 +1067,6 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 #if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
-static inline uintptr_t get_base_addr(struct rte_mempool *mp, uint64_t *align)
-{
-	struct rte_mempool_memhdr *memhdr;
-	uintptr_t memhdr_addr, aligned_addr;
-
-	memhdr = STAILQ_FIRST(&mp->mem_list);
-	memhdr_addr = (uintptr_t)memhdr->addr;
-	aligned_addr = memhdr_addr & ~(getpagesize() - 1);
-	*align = memhdr_addr - aligned_addr;
-
-	return aligned_addr;
-}
-
 /* Check if the netdev,qid context already exists */
 static inline bool
 ctx_exists(struct pkt_rx_queue *rxq, const char *ifname,
@@ -1150,9 +1137,10 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 		.fill_size = ETH_AF_XDP_DFLT_NUM_DESCS * 2,
 		.comp_size = ETH_AF_XDP_DFLT_NUM_DESCS,
 		.flags = XDP_UMEM_UNALIGNED_CHUNK_FLAG};
-	void *base_addr = NULL;
 	struct rte_mempool *mb_pool = rxq->mb_pool;
-	uint64_t umem_size, align = 0;
+	void *aligned_addr;
+	uint64_t umem_size;
+	struct rte_mempool_mem_range_info range;
 
 	if (internals->shared_umem) {
 		if (get_shared_umem(rxq, internals->if_name, &umem) < 0)
@@ -1184,19 +1172,29 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 		}
 
 		umem->mb_pool = mb_pool;
-		base_addr = (void *)get_base_addr(mb_pool, &align);
-		umem_size = (uint64_t)mb_pool->populated_size *
-				(uint64_t)usr_config.frame_size +
-				align;
-
-		ret = xsk_umem__create(&umem->umem, base_addr, umem_size,
+		ret = rte_mempool_get_mem_range(mb_pool, &range);
+		if (ret < 0) {
+			AF_XDP_LOG(ERR, "Failed(%d) to get range from mempool\n", ret);
+			goto err;
+		}
+		if (!range.is_contiguous) {
+			AF_XDP_LOG(ERR, "Can't map to umem as mempool is not contiguous\n");
+			goto err;
+		}
+		/*
+		 * umem requires a page-aligned area; mapping a larger one is safe as
+		 * each XSK TX/RX descriptor pointer is derived from the mbuf data area.
+		 */
+		aligned_addr = (void *)RTE_ALIGN_FLOOR((uintptr_t)range.start, getpagesize());
+		umem_size = range.length + RTE_PTR_DIFF(range.start, aligned_addr);
+		ret = xsk_umem__create(&umem->umem, aligned_addr, umem_size,
 				       &rxq->fq, &rxq->cq, &usr_config);
 		if (ret) {
 			AF_XDP_LOG(ERR, "Failed to create umem [%d]: [%s]\n",
 				   errno, strerror(errno));
 			goto err;
 		}
-		umem->buffer = base_addr;
+		umem->buffer = aligned_addr;
 
 		if (internals->shared_umem) {
 			umem->max_xsks = mb_pool->populated_size /
-- 
2.34.1
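
As a quick sanity check on the alignment arithmetic in the last hunk,
the properties it relies on can be verified with a hypothetical
standalone snippet (not part of the patch; the address and length
values are made up, and a 64-bit build is assumed):

	#include <assert.h>
	#include <stdint.h>
	#include <unistd.h>

	int main(void)
	{
		uintptr_t start = 0x7f2a10003a40; /* hypothetical range.start */
		uint64_t length = 2097152;        /* hypothetical range.length (2 MB) */
		uintptr_t page = (uintptr_t)getpagesize();

		/* Same math as RTE_ALIGN_FLOOR + RTE_PTR_DIFF in the patch. */
		uintptr_t aligned = start & ~(page - 1);
		uint64_t umem_size = length + (start - aligned);

		assert(aligned % page == 0);                   /* base is page aligned */
		assert(aligned + umem_size == start + length); /* mapping covers the range */
		return 0;
	}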