From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Ian Stokes, Konstantin Ananyev, Vladimir Medvedkin,
	Anatoly Burakov
Subject: [PATCH v4 21/24] net/_common_intel: remove unneeded code
Date: Fri, 20 Dec 2024 14:39:18 +0000
Message-ID: <20241220143925.609044-22-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241220143925.609044-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com>
	<20241220143925.609044-1-bruce.richardson@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With all drivers using the common Tx structure updated so that their
vector paths all use the simplified Tx mbuf ring format, it's no longer
necessary to have one flag for the ring format and another for the use
of a vector driver. Remove the former flag and base all decisions off
the vector flag alone.

With that done, there are only two paths to consider when releasing all
mbufs in the ring, rather than three, which allows further
simplification of the ci_txq_release_all_mbufs() function. The separate
function to free buffers for vector drivers not using the simplified
ring format can likewise be removed, as it is no longer needed.
Signed-off-by: Bruce Richardson
---
 drivers/net/_common_intel/tx.h            | 97 +++--------------------
 drivers/net/i40e/i40e_rxtx.c              |  1 -
 drivers/net/iavf/iavf_rxtx_vec_sse.c      |  1 -
 drivers/net/ice/ice_rxtx.c                |  1 -
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h |  1 -
 5 files changed, 10 insertions(+), 91 deletions(-)

diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index aa42b9b49f..d9cf4474fc 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -66,7 +66,6 @@ struct ci_tx_queue {
 	bool tx_deferred_start; /* don't start this queue in dev start */
 	bool q_set; /* indicate if tx queue has been configured */
 	bool vector_tx; /* port is using vector TX */
-	bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
 	union { /* the VSI this queue belongs to */
 		struct i40e_vsi *i40e_vsi;
 		struct iavf_vsi *iavf_vsi;
@@ -120,72 +119,6 @@ ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts,
 
 typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
 
-static __rte_always_inline int
-ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
-{
-	struct ci_tx_entry *txep;
-	uint32_t n;
-	uint32_t i;
-	int nb_free = 0;
-	struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
-
-	/* check DD bits on threshold descriptor */
-	if (!desc_done(txq, txq->tx_next_dd))
-		return 0;
-
-	n = txq->tx_rs_thresh;
-
-	/* first buffer to free from S/W ring is at index
-	 * tx_next_dd - (tx_rs_thresh-1)
-	 */
-	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
-	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
-		for (i = 0; i < n; i++) {
-			free[i] = txep[i].mbuf;
-			/* no need to reset txep[i].mbuf in vector path */
-		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
-		goto done;
-	}
-
-	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
-	if (likely(m != NULL)) {
-		free[0] = m;
-		nb_free = 1;
-		for (i = 1; i < n; i++) {
-			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
-			if (likely(m != NULL)) {
-				if (likely(m->pool == free[0]->pool)) {
-					free[nb_free++] = m;
-				} else {
-					rte_mempool_put_bulk(free[0]->pool,
-							(void *)free,
-							nb_free);
-					free[0] = m;
-					nb_free = 1;
-				}
-			}
-		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
-	} else {
-		for (i = 1; i < n; i++) {
-			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
-			if (m != NULL)
-				rte_mempool_put(m->pool, m);
-		}
-	}
-
-done:
-	/* buffers were freed, update counters */
-	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
-	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
-	if (txq->tx_next_dd >= txq->nb_tx_desc)
-		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
-	return txq->tx_rs_thresh;
-}
-
 static __rte_always_inline int
 ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
 {
@@ -278,21 +211,6 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
 	return txq->tx_rs_thresh;
 }
 
-#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
-		uint16_t i = start; \
-		if (end < i) { \
-			for (; i < nb_desc; i++) { \
-				rte_pktmbuf_free_seg(swr[i].mbuf); \
-				swr[i].mbuf = NULL; \
-			} \
-			i = 0; \
-		} \
-		for (; i < end; i++) { \
-			rte_pktmbuf_free_seg(swr[i].mbuf); \
-			swr[i].mbuf = NULL; \
-		} \
-} while (0)
-
 static inline void
 ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
 {
@@ -311,16 +229,21 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
 
 	/**
 	 *  vPMD tx will not set sw_ring's mbuf to NULL after free,
-	 *  so need to free remains more carefully.
+	 *  so determining buffers to free is a little more complex.
 	 */
 	const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
 	const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
 	const uint16_t end = txq->tx_tail >> use_ctx;
 
-	if (txq->vector_sw_ring)
-		IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
-	else
-		IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
+	uint16_t i = start;
+	if (end < i) {
+		for (; i < nb_desc; i++)
+			rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+		i = 0;
+	}
+	for (; i < end; i++)
+		rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+	memset(txq->sw_ring_vec, 0, sizeof(txq->sw_ring_vec[0]) * nb_desc);
 }
 
 #endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 745c467912..c3ff2e05c3 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,6 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 			tx_queue_id);
 
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = txq->vector_tx;
 
 	/*
 	 * tx_queue_id is queue id application refers to, while
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 9f7db80bfd..21d5bfd309 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1462,7 +1462,6 @@ int __rte_cold
 iavf_txq_vec_setup(struct ci_tx_queue *txq)
 {
 	txq->vector_tx = true;
-	txq->vector_sw_ring = txq->vector_tx;
 	return 0;
 }
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 77cb6688a7..dcfa409813 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,6 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	/* record what kind of descriptor cleanup we need on teardown */
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = txq->vector_tx;
 
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 2b12bdcc9c..53d1fed6f8 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -182,7 +182,6 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
 	txq->sw_ring_vec = txq->sw_ring_vec + 1;
 	txq->ops = txq_ops;
 	txq->vector_tx = 1;
-	txq->vector_sw_ring = 1;
 	return 0;
 }
 
-- 
2.43.0
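
(Note for reviewers, not part of the patch: the wrap-around handling that now
lives inline in ci_txq_release_all_mbufs() can be read in isolation as the
small standalone sketch below. The names used here - ring_entry, free_seg,
release_range, RING_SIZE - are made-up stand-ins for the driver's types and
for rte_pktmbuf_free_seg(), chosen only to keep the example self-contained
and compilable on its own.)

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the vector SW ring entry: only the mbuf pointer matters here. */
struct ring_entry {
	void *mbuf;
};

/* Stand-in for rte_pktmbuf_free_seg(). */
static void free_seg(void *mbuf)
{
	printf("freeing %p\n", mbuf);
}

/*
 * Free the entries in [start, end), where the range may wrap past the end of
 * the ring, then clear the whole ring in one go - mirroring the logic the
 * patch puts inline in ci_txq_release_all_mbufs().
 */
static void
release_range(struct ring_entry *ring, uint16_t nb_desc, uint16_t start, uint16_t end)
{
	uint16_t i = start;

	if (end < i) {
		/* range wraps: free from start up to the end of the ring first */
		for (; i < nb_desc; i++)
			free_seg(ring[i].mbuf);
		i = 0;
	}
	for (; i < end; i++)
		free_seg(ring[i].mbuf);

	/* a single memset replaces the per-entry "mbuf = NULL" of the old macro */
	memset(ring, 0, sizeof(ring[0]) * nb_desc);
}

#define RING_SIZE 8

int main(void)
{
	int payload[RING_SIZE];
	struct ring_entry ring[RING_SIZE];

	for (unsigned int i = 0; i < RING_SIZE; i++)
		ring[i].mbuf = &payload[i];

	/* start = 6, end = 2: frees slots 6, 7, 0, 1 in that order */
	release_range(ring, RING_SIZE, 6, 2);
	return 0;
}

The point this is meant to illustrate is that, with only one SW ring layout
left, the wrap-around loop no longer needs to be a macro parameterised on the
ring type, and deferring the NULL-ing of entries to one memset keeps the free
loop itself minimal.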