From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bruce Richardson
To: dev@dpdk.org
Cc: david.marchand@redhat.com, Bruce Richardson, Ian Stokes,
 Vladimir Medvedkin, Anatoly Burakov
Subject: [PATCH v5 22/25] net/intel/common: remove unneeded code
Date: Mon, 20 Jan 2025 12:00:04 +0000
Message-ID: <20250120120016.1530274-23-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250120120016.1530274-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com>
 <20250120120016.1530274-1-bruce.richardson@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Now that all drivers using the common Tx structure have been updated so
that their vector paths use the simplified Tx mbuf ring format, it is no
longer necessary to have one flag for the ring format and another for
the use of a vector driver. Remove the ring-format flag and base all
decisions on the vector flag alone.

With that done, there are only two paths to consider when releasing all
mbufs in the ring, rather than three, which allows further
simplification of the "ci_txq_release_all_mbufs" function. The separate
function for freeing buffers from a vector driver that does not use the
simplified ring format can likewise be removed, as it is no longer
needed.

Signed-off-by: Bruce Richardson
---
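Note for reviewers (not part of the commit): with IETH_FREE_BUFS_LOOP
gone, ci_txq_release_all_mbufs() always walks the single vector sw_ring,
wrapping once at the end of the ring. The standalone C sketch below
mirrors that wrap-around walk so it can be stepped through outside of
DPDK. The demo_txq struct and free_entry() helper are hypothetical
stand-ins for ci_tx_queue and rte_pktmbuf_free_seg(), and the use_ctx
index shift is omitted; this is illustrative only, not part of the patch.

/*
 * Standalone sketch: the wrap-around release walk now used
 * unconditionally in ci_txq_release_all_mbufs(). All names here are
 * demo stand-ins, not DPDK API.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DEMO_NB_DESC 8

struct demo_txq {
	uint16_t nb_tx_desc;         /* ring size */
	uint16_t tx_tail;            /* next slot to be filled by Tx burst */
	uint16_t tx_next_dd;         /* last slot of the completed group */
	uint16_t tx_rs_thresh;       /* free threshold */
	void *sw_ring[DEMO_NB_DESC]; /* stands in for sw_ring_vec[i].mbuf */
};

static void
free_entry(struct demo_txq *q, uint16_t i)
{
	/* stands in for rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf) */
	printf("releasing mbuf in slot %u (%p)\n", i, q->sw_ring[i]);
}

static void
demo_release_all_mbufs(struct demo_txq *q)
{
	/* first in-flight slot is just after the last completed group */
	const uint16_t start = (uint16_t)(q->tx_next_dd - q->tx_rs_thresh + 1);
	const uint16_t end = q->tx_tail;
	uint16_t i = start;

	/* wrapped case: free up to the end of the ring, then restart at 0 */
	if (end < i) {
		for (; i < q->nb_tx_desc; i++)
			free_entry(q, i);
		i = 0;
	}
	for (; i < end; i++)
		free_entry(q, i);

	/* as in the patch, clear the whole software ring afterwards */
	memset(q->sw_ring, 0, sizeof(q->sw_ring));
}

int
main(void)
{
	static int dummy_mbufs[DEMO_NB_DESC];
	struct demo_txq q = {
		.nb_tx_desc = DEMO_NB_DESC, .tx_tail = 2,
		.tx_next_dd = 5, .tx_rs_thresh = 2,
	};

	for (uint16_t i = 0; i < DEMO_NB_DESC; i++)
		q.sw_ring[i] = &dummy_mbufs[i];

	demo_release_all_mbufs(&q); /* walks slots 4..7, then 0..1 */
	return 0;
}

Built with a plain C compiler this prints the six occupied slots in
wrapped order (4..7, then 0..1), which is the same order the new loop in
tx.h visits when the tail index has wrapped past the last completed
descriptor group.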
 drivers/net/intel/common/tx.h                 | 97 ++-----------------
 drivers/net/intel/i40e/i40e_rxtx.c            |  1 -
 drivers/net/intel/iavf/iavf_rxtx_vec_sse.c    |  1 -
 drivers/net/intel/ice/ice_rxtx.c              |  1 -
 .../net/intel/ixgbe/ixgbe_rxtx_vec_common.h   |  1 -
 5 files changed, 10 insertions(+), 91 deletions(-)

diff --git a/drivers/net/intel/common/tx.h b/drivers/net/intel/common/tx.h
index aa42b9b49f..d9cf4474fc 100644
--- a/drivers/net/intel/common/tx.h
+++ b/drivers/net/intel/common/tx.h
@@ -66,7 +66,6 @@ struct ci_tx_queue {
 	bool tx_deferred_start; /* don't start this queue in dev start */
 	bool q_set; /* indicate if tx queue has been configured */
 	bool vector_tx; /* port is using vector TX */
-	bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
 	union { /* the VSI this queue belongs to */
 		struct i40e_vsi *i40e_vsi;
 		struct iavf_vsi *iavf_vsi;
@@ -120,72 +119,6 @@ ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts,
 
 typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
 
-static __rte_always_inline int
-ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
-{
-	struct ci_tx_entry *txep;
-	uint32_t n;
-	uint32_t i;
-	int nb_free = 0;
-	struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
-
-	/* check DD bits on threshold descriptor */
-	if (!desc_done(txq, txq->tx_next_dd))
-		return 0;
-
-	n = txq->tx_rs_thresh;
-
-	/* first buffer to free from S/W ring is at index
-	 * tx_next_dd - (tx_rs_thresh-1)
-	 */
-	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
-	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
-		for (i = 0; i < n; i++) {
-			free[i] = txep[i].mbuf;
-			/* no need to reset txep[i].mbuf in vector path */
-		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
-		goto done;
-	}
-
-	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
-	if (likely(m != NULL)) {
-		free[0] = m;
-		nb_free = 1;
-		for (i = 1; i < n; i++) {
-			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
-			if (likely(m != NULL)) {
-				if (likely(m->pool == free[0]->pool)) {
-					free[nb_free++] = m;
-				} else {
-					rte_mempool_put_bulk(free[0]->pool,
-							(void *)free,
-							nb_free);
-					free[0] = m;
-					nb_free = 1;
-				}
-			}
-		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
-	} else {
-		for (i = 1; i < n; i++) {
-			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
-			if (m != NULL)
-				rte_mempool_put(m->pool, m);
-		}
-	}
-
-done:
-	/* buffers were freed, update counters */
-	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
-	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
-	if (txq->tx_next_dd >= txq->nb_tx_desc)
-		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
-	return txq->tx_rs_thresh;
-}
-
 static __rte_always_inline int
 ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
 {
@@ -278,21 +211,6 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
 	return txq->tx_rs_thresh;
 }
 
-#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
-	uint16_t i = start; \
-	if (end < i) { \
-		for (; i < nb_desc; i++) { \
-			rte_pktmbuf_free_seg(swr[i].mbuf); \
-			swr[i].mbuf = NULL; \
-		} \
-		i = 0; \
-	} \
-	for (; i < end; i++) { \
-		rte_pktmbuf_free_seg(swr[i].mbuf); \
-		swr[i].mbuf = NULL; \
-	} \
-} while (0)
-
 static inline void
 ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
 {
@@ -311,16 +229,21 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
 
 	/**
 	 * vPMD tx will not set sw_ring's mbuf to NULL after free,
-	 * so need to free remains more carefully.
+	 * so determining buffers to free is a little more complex.
 	 */
 	const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
 	const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
 	const uint16_t end = txq->tx_tail >> use_ctx;
 
-	if (txq->vector_sw_ring)
-		IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
-	else
-		IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
+	uint16_t i = start;
+	if (end < i) {
+		for (; i < nb_desc; i++)
+			rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+		i = 0;
+	}
+	for (; i < end; i++)
+		rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+	memset(txq->sw_ring_vec, 0, sizeof(txq->sw_ring_vec[0]) * nb_desc);
 }
 
 #endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/intel/i40e/i40e_rxtx.c b/drivers/net/intel/i40e/i40e_rxtx.c
index 745c467912..c3ff2e05c3 100644
--- a/drivers/net/intel/i40e/i40e_rxtx.c
+++ b/drivers/net/intel/i40e/i40e_rxtx.c
@@ -1891,7 +1891,6 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 			tx_queue_id);
 
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = txq->vector_tx;
 
 	/*
 	 * tx_queue_id is queue id application refers to, while
diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_sse.c b/drivers/net/intel/iavf/iavf_rxtx_vec_sse.c
index 9f7db80bfd..21d5bfd309 100644
--- a/drivers/net/intel/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/intel/iavf/iavf_rxtx_vec_sse.c
@@ -1462,7 +1462,6 @@ int __rte_cold
 iavf_txq_vec_setup(struct ci_tx_queue *txq)
 {
 	txq->vector_tx = true;
-	txq->vector_sw_ring = txq->vector_tx;
 	return 0;
 }
 
diff --git a/drivers/net/intel/ice/ice_rxtx.c b/drivers/net/intel/ice/ice_rxtx.c
index 77cb6688a7..dcfa409813 100644
--- a/drivers/net/intel/ice/ice_rxtx.c
+++ b/drivers/net/intel/ice/ice_rxtx.c
@@ -825,7 +825,6 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	/* record what kind of descriptor cleanup we need on teardown */
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = txq->vector_tx;
 
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
 
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.h
index bd52775c48..9584ac3a44 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.h
@@ -182,7 +182,6 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
 	txq->sw_ring_vec = txq->sw_ring_vec + 1;
 	txq->ops = txq_ops;
 	txq->vector_tx = 1;
-	txq->vector_sw_ring = 1;
 	return 0;
 }
 
-- 
2.43.0