From mboxrd@z Thu Jan  1 00:00:00 1970
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Ian Stokes, Konstantin Ananyev, Vladimir Medvedkin,
	Anatoly Burakov
Subject: [PATCH v2 21/22] net/_common_intel: remove unneeded code
Date: Tue, 3 Dec 2024 16:41:27 +0000
Message-ID: <20241203164132.2686558-22-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com>
	<20241203164132.2686558-1-bruce.richardson@intel.com>
List-Id: DPDK patches and discussions

With all drivers using the common Tx structure updated so that their
vector paths all use the simplified Tx mbuf ring format, it's no longer
necessary to have separate flags for the ring format and for the use of
a vector driver. Remove the former flag and base all decisions off the
vector flag.

With that done, we are left with only two paths to consider when
releasing all mbufs in the ring, rather than three. That allows further
simplification of the "ci_txq_release_all_mbufs" function. The separate
function to free buffers from a vector driver not using the simplified
ring format can similarly be removed, as it is no longer necessary.
Signed-off-by: Bruce Richardson
---
 drivers/net/_common_intel/tx.h            | 97 +++--------------------
 drivers/net/i40e/i40e_rxtx.c              |  1 -
 drivers/net/iavf/iavf_rxtx_vec_sse.c      |  1 -
 drivers/net/ice/ice_rxtx.c                |  1 -
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h |  1 -
 5 files changed, 10 insertions(+), 91 deletions(-)

diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index aa42b9b49f..d9cf4474fc 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -66,7 +66,6 @@ struct ci_tx_queue {
 	bool tx_deferred_start; /* don't start this queue in dev start */
 	bool q_set; /* indicate if tx queue has been configured */
 	bool vector_tx; /* port is using vector TX */
-	bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
 	union { /* the VSI this queue belongs to */
 		struct i40e_vsi *i40e_vsi;
 		struct iavf_vsi *iavf_vsi;
@@ -120,72 +119,6 @@ ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts,

 typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);

-static __rte_always_inline int
-ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
-{
-	struct ci_tx_entry *txep;
-	uint32_t n;
-	uint32_t i;
-	int nb_free = 0;
-	struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
-
-	/* check DD bits on threshold descriptor */
-	if (!desc_done(txq, txq->tx_next_dd))
-		return 0;
-
-	n = txq->tx_rs_thresh;
-
-	/* first buffer to free from S/W ring is at index
-	 * tx_next_dd - (tx_rs_thresh-1)
-	 */
-	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
-	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
-		for (i = 0; i < n; i++) {
-			free[i] = txep[i].mbuf;
-			/* no need to reset txep[i].mbuf in vector path */
-		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
-		goto done;
-	}
-
-	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
-	if (likely(m != NULL)) {
-		free[0] = m;
-		nb_free = 1;
-		for (i = 1; i < n; i++) {
-			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
-			if (likely(m != NULL)) {
-				if (likely(m->pool == free[0]->pool)) {
-					free[nb_free++] = m;
-				} else {
-					rte_mempool_put_bulk(free[0]->pool,
-							(void *)free,
-							nb_free);
-					free[0] = m;
-					nb_free = 1;
-				}
-			}
-		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
-	} else {
-		for (i = 1; i < n; i++) {
-			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
-			if (m != NULL)
-				rte_mempool_put(m->pool, m);
-		}
-	}
-
-done:
-	/* buffers were freed, update counters */
-	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
-	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
-	if (txq->tx_next_dd >= txq->nb_tx_desc)
-		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
-	return txq->tx_rs_thresh;
-}
-
 static __rte_always_inline int
 ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
 {
@@ -278,21 +211,6 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
 	return txq->tx_rs_thresh;
 }

-#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
-		uint16_t i = start; \
-		if (end < i) { \
-			for (; i < nb_desc; i++) { \
-				rte_pktmbuf_free_seg(swr[i].mbuf); \
-				swr[i].mbuf = NULL; \
-			} \
-			i = 0; \
-		} \
-		for (; i < end; i++) { \
-			rte_pktmbuf_free_seg(swr[i].mbuf); \
-			swr[i].mbuf = NULL; \
-		} \
-} while (0)
-
 static inline void
 ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
 {
@@ -311,16 +229,21 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)

 	/**
 	 *  vPMD tx will not set sw_ring's mbuf to NULL after free,
-	 *  so need to free remains more carefully.
+	 *  so determining buffers to free is a little more complex.
 	 */
 	const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
 	const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
 	const uint16_t end = txq->tx_tail >> use_ctx;

-	if (txq->vector_sw_ring)
-		IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
-	else
-		IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
+	uint16_t i = start;
+	if (end < i) {
+		for (; i < nb_desc; i++)
+			rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+		i = 0;
+	}
+	for (; i < end; i++)
+		rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+	memset(txq->sw_ring_vec, 0, sizeof(txq->sw_ring_vec[0]) * nb_desc);
 }

 #endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 745c467912..c3ff2e05c3 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,6 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 			tx_queue_id);

 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = txq->vector_tx;

 	/*
 	 * tx_queue_id is queue id application refers to, while
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 9f7db80bfd..21d5bfd309 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1462,7 +1462,6 @@ int __rte_cold
 iavf_txq_vec_setup(struct ci_tx_queue *txq)
 {
 	txq->vector_tx = true;
-	txq->vector_sw_ring = txq->vector_tx;
 	return 0;
 }

diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 77cb6688a7..dcfa409813 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,6 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)

 	/* record what kind of descriptor cleanup we need on teardown */
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = txq->vector_tx;

 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;

diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 65794e45cb..3d4840c3b7 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -183,7 +183,6 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
 	txq->sw_ring_vec = txq->sw_ring_vec + 1;
 	txq->ops = txq_ops;
 	txq->vector_tx = 1;
-	txq->vector_sw_ring = 1;
 	return 0;
 }
-- 
2.43.0