From mboxrd@z Thu Jan  1 00:00:00 1970
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Ian Stokes, Konstantin Ananyev, Vladimir Medvedkin,
 Anatoly Burakov
Subject: [PATCH v3 21/22] net/_common_intel: remove unneeded code
Date: Wed, 11 Dec 2024 17:33:27 +0000
Message-ID: <20241211173331.65262-22-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241211173331.65262-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com>
 <20241211173331.65262-1-bruce.richardson@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

With all drivers using the common Tx structure updated so that their
vector paths all use the simplified Tx mbuf ring format, it is no longer
necessary to have one flag for the ring format and a separate one for
use of a vector driver. Remove the former flag and base all decisions on
the vector flag alone. With that done, there are only two paths to
consider when releasing all mbufs in the ring, rather than three, which
allows further simplification of the "ci_txq_release_all_mbufs"
function. The separate function to free buffers for vector drivers not
using the simplified ring format can likewise be removed, as it is no
longer necessary.
Signed-off-by: Bruce Richardson
---
 drivers/net/_common_intel/tx.h            | 97 +++--------------------
 drivers/net/i40e/i40e_rxtx.c              |  1 -
 drivers/net/iavf/iavf_rxtx_vec_sse.c      |  1 -
 drivers/net/ice/ice_rxtx.c                |  1 -
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h |  1 -
 5 files changed, 10 insertions(+), 91 deletions(-)

diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index aa42b9b49f..d9cf4474fc 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -66,7 +66,6 @@ struct ci_tx_queue {
 	bool tx_deferred_start; /* don't start this queue in dev start */
 	bool q_set; /* indicate if tx queue has been configured */
 	bool vector_tx; /* port is using vector TX */
-	bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
 	union { /* the VSI this queue belongs to */
 		struct i40e_vsi *i40e_vsi;
 		struct iavf_vsi *iavf_vsi;
@@ -120,72 +119,6 @@ ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts,
 
 typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
 
-static __rte_always_inline int
-ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
-{
-	struct ci_tx_entry *txep;
-	uint32_t n;
-	uint32_t i;
-	int nb_free = 0;
-	struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
-
-	/* check DD bits on threshold descriptor */
-	if (!desc_done(txq, txq->tx_next_dd))
-		return 0;
-
-	n = txq->tx_rs_thresh;
-
-	/* first buffer to free from S/W ring is at index
-	 * tx_next_dd - (tx_rs_thresh-1)
-	 */
-	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
-	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
-		for (i = 0; i < n; i++) {
-			free[i] = txep[i].mbuf;
-			/* no need to reset txep[i].mbuf in vector path */
-		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
-		goto done;
-	}
-
-	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
-	if (likely(m != NULL)) {
-		free[0] = m;
-		nb_free = 1;
-		for (i = 1; i < n; i++) {
-			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
-			if (likely(m != NULL)) {
-				if (likely(m->pool == free[0]->pool)) {
-					free[nb_free++] = m;
-				} else {
-					rte_mempool_put_bulk(free[0]->pool,
-							(void *)free,
-							nb_free);
-					free[0] = m;
-					nb_free = 1;
-				}
-			}
-		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
-	} else {
-		for (i = 1; i < n; i++) {
-			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
-			if (m != NULL)
-				rte_mempool_put(m->pool, m);
-		}
-	}
-
-done:
-	/* buffers were freed, update counters */
-	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
-	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
-	if (txq->tx_next_dd >= txq->nb_tx_desc)
-		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
-	return txq->tx_rs_thresh;
-}
-
 static __rte_always_inline int
 ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
 {
@@ -278,21 +211,6 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
 	return txq->tx_rs_thresh;
 }
 
-#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
-		uint16_t i = start; \
-		if (end < i) { \
-			for (; i < nb_desc; i++) { \
-				rte_pktmbuf_free_seg(swr[i].mbuf); \
-				swr[i].mbuf = NULL; \
-			} \
-			i = 0; \
-		} \
-		for (; i < end; i++) { \
-			rte_pktmbuf_free_seg(swr[i].mbuf); \
-			swr[i].mbuf = NULL; \
-		} \
-} while (0)
-
 static inline void
 ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
 {
@@ -311,16 +229,21 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
 
 	/**
 	 * vPMD tx will not set sw_ring's mbuf to NULL after free,
-	 * so need to free remains more carefully.
+	 * so determining buffers to free is a little more complex.
 	 */
 	const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
 	const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
 	const uint16_t end = txq->tx_tail >> use_ctx;
 
-	if (txq->vector_sw_ring)
-		IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
-	else
-		IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
+	uint16_t i = start;
+	if (end < i) {
+		for (; i < nb_desc; i++)
+			rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+		i = 0;
+	}
+	for (; i < end; i++)
+		rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+	memset(txq->sw_ring_vec, 0, sizeof(txq->sw_ring_vec[0]) * nb_desc);
 }
 
 #endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 745c467912..c3ff2e05c3 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,6 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 			tx_queue_id);
 
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = txq->vector_tx;
 
 	/*
 	 * tx_queue_id is queue id application refers to, while
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 9f7db80bfd..21d5bfd309 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1462,7 +1462,6 @@ int __rte_cold
 iavf_txq_vec_setup(struct ci_tx_queue *txq)
 {
 	txq->vector_tx = true;
-	txq->vector_sw_ring = txq->vector_tx;
 	return 0;
 }
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 77cb6688a7..dcfa409813 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,6 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	/* record what kind of descriptor cleanup we need on teardown */
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = txq->vector_tx;
 
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 2b12bdcc9c..53d1fed6f8 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -182,7 +182,6 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
 	txq->sw_ring_vec = txq->sw_ring_vec + 1;
 	txq->ops = txq_ops;
 	txq->vector_tx = 1;
-	txq->vector_sw_ring = 1;
 
 	return 0;
 }
-- 
2.43.0