From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ciara Loftus <ciara.loftus@intel.com>
To: dev@dpdk.org
Subject: [PATCH] net/af_xdp: fix shared umem fill queue reserve
Date: Fri, 11 Mar 2022 13:45:13 +0000
Message-Id: <20220311134513.25512-1-ciara.loftus@intel.com>
X-Mailer: git-send-email 2.17.1

Commit 81fe6720f84f ("net/af_xdp: reserve fill queue before socket
create") moved the fill queue reserve logic to before the creation of
the socket, in order to suppress kernel logs like:

  XSK buffer pool does not provide enough addresses to fill 2047
  buffers on Rx ring 0

However, for queues that share a umem, the fill queue reserve must
occur after the socket creation, because the fill queue is not valid
until that point. This commit uses the umem refcnt value to determine
whether the queue is sharing a umem, and performs the fill queue
reservation either before or after the socket creation, depending on
the refcnt value. The kernel logs will still be seen for the shared
umem queues.
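
To make the ordering decision concrete, below is a minimal,
self-contained sketch of the refcnt check the fix relies on.
struct umem_info and umem_is_shared() are illustrative stand-ins
for this note only; the driver itself performs the same acquire-load
test directly on rxq->umem->refcnt, as the diff below shows.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the driver's per-UMEM bookkeeping. */
struct umem_info {
	unsigned int refcnt;	/* number of queues attached to this UMEM */
};

static bool umem_is_shared(struct umem_info *umem)
{
	/* Mirrors the patch's test: refcnt <= 1 means this queue is
	 * (so far) the sole user of the UMEM. */
	return __atomic_load_n(&umem->refcnt, __ATOMIC_ACQUIRE) > 1;
}

int main(void)
{
	struct umem_info umem = { .refcnt = 1 };

	/* Sole user: the fill queue is valid as soon as the UMEM
	 * exists, so reserve before socket creation
	 * (xsk_socket__create() in the driver). */
	printf("reserve %s socket create\n",
	       umem_is_shared(&umem) ? "after" : "before");

	/* A second queue attaches to the same UMEM. Its fill queue
	 * only becomes valid after socket creation, so the
	 * reservation must move to after it. */
	__atomic_add_fetch(&umem.refcnt, 1, __ATOMIC_ACQ_REL);
	printf("reserve %s socket create\n",
	       umem_is_shared(&umem) ? "after" : "before");

	return 0;
}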
Fixes: 81fe6720f84f ("net/af_xdp: reserve fill queue before socket create")

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 65479138d3..c7e5fb94df 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -1277,11 +1277,13 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 	int ret = 0;
 	int reserve_size = ETH_AF_XDP_DFLT_NUM_DESCS;
 	struct rte_mbuf *fq_bufs[reserve_size];
+	bool reserve_before;
 
 	rxq->umem = xdp_umem_configure(internals, rxq);
 	if (rxq->umem == NULL)
 		return -ENOMEM;
 	txq->umem = rxq->umem;
+	reserve_before = __atomic_load_n(&rxq->umem->refcnt, __ATOMIC_ACQUIRE) <= 1;
 
 #if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
 	ret = rte_pktmbuf_alloc_bulk(rxq->umem->mb_pool, fq_bufs, reserve_size);
@@ -1291,10 +1293,13 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 	}
 #endif
 
-	ret = reserve_fill_queue(rxq->umem, reserve_size, fq_bufs, &rxq->fq);
-	if (ret) {
-		AF_XDP_LOG(ERR, "Failed to reserve fill queue.\n");
-		goto out_umem;
+	/* reserve fill queue of queues not (yet) sharing UMEM */
+	if (reserve_before) {
+		ret = reserve_fill_queue(rxq->umem, reserve_size, fq_bufs, &rxq->fq);
+		if (ret) {
+			AF_XDP_LOG(ERR, "Failed to reserve fill queue.\n");
+			goto out_umem;
+		}
 	}
 
 	cfg.rx_size = ring_size;
@@ -1335,6 +1340,15 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 		goto out_umem;
 	}
 
+	if (!reserve_before) {
+		/* reserve fill queue of queues sharing UMEM */
+		ret = reserve_fill_queue(rxq->umem, reserve_size, fq_bufs, &rxq->fq);
+		if (ret) {
+			AF_XDP_LOG(ERR, "Failed to reserve fill queue.\n");
+			goto out_xsk;
+		}
+	}
+
 	/* insert the xsk into the xsks_map */
 	if (internals->custom_prog_configured) {
 		int err, fd;
-- 
2.17.1