From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: david.marchand@redhat.com, anatoly.burakov@intel.com,
 vladimir.medvedkin@intel.com, ian.stokes@intel.com, praveen.shetty@intel.com,
 Bruce Richardson <bruce.richardson@intel.com>
Subject: [PATCH v6 19/25] net/ice: use vector SW ring for all vector paths
Date: Fri, 24 Jan 2025 16:29:14 +0000
Message-ID: <20250124162921.1406103-20-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250124162921.1406103-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com>
 <20250124162921.1406103-1-bruce.richardson@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The AVX-512 code path used a smaller SW ring structure containing only
the mbuf pointer and no other fields. The other fields are used only by
the scalar code path, so update all vector driver code paths to use the
smaller, faster structure.
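For reference, the two SW ring entry layouts involved are sketched
below (a simplified sketch based on drivers/net/intel/common/tx.h; the
comments and size notes here are illustrative, not part of the header):

	/* Scalar SW ring entry: the extra fields are needed only by
	 * the scalar cleanup path; pads to 16 bytes on 64-bit builds.
	 */
	struct ci_tx_entry {
		struct rte_mbuf *mbuf;	/* mbuf backing this descriptor */
		uint16_t next_id;	/* index of next descriptor in ring */
		uint16_t last_id;	/* index of last scattered descriptor */
	};

	/* Vector SW ring entry: a single 8-byte mbuf pointer, roughly
	 * halving the SW ring footprint and cache traffic per burst.
	 */
	struct ci_tx_entry_vec {
		struct rte_mbuf *mbuf;	/* mbuf backing this descriptor */
	};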
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/net/intel/common/tx.h               |  7 +++++++
 drivers/net/intel/ice/ice_rxtx.c            |  2 +-
 drivers/net/intel/ice/ice_rxtx_vec_avx2.c   | 12 ++++++------
 drivers/net/intel/ice/ice_rxtx_vec_avx512.c | 14 ++------------
 drivers/net/intel/ice/ice_rxtx_vec_common.h |  6 ------
 drivers/net/intel/ice/ice_rxtx_vec_sse.c    | 12 ++++++------
 6 files changed, 22 insertions(+), 31 deletions(-)

diff --git a/drivers/net/intel/common/tx.h b/drivers/net/intel/common/tx.h
index 310b51adcf..aa42b9b49f 100644
--- a/drivers/net/intel/common/tx.h
+++ b/drivers/net/intel/common/tx.h
@@ -109,6 +109,13 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
 		txep[i].mbuf = tx_pkts[i];
 }
 
+static __rte_always_inline void
+ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	for (uint16_t i = 0; i < nb_pkts; ++i)
+		txep[i].mbuf = tx_pkts[i];
+}
+
 #define IETH_VPMD_TX_MAX_FREE_BUF 64
 
 typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
diff --git a/drivers/net/intel/ice/ice_rxtx.c b/drivers/net/intel/ice/ice_rxtx.c
index 03a7092e79..8c1e508147 100644
--- a/drivers/net/intel/ice/ice_rxtx.c
+++ b/drivers/net/intel/ice/ice_rxtx.c
@@ -826,7 +826,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	/* record what kind of descriptor cleanup we need on teardown */
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = ad->tx_use_avx512;
+	txq->vector_sw_ring = txq->vector_tx;
 
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
 
diff --git a/drivers/net/intel/ice/ice_rxtx_vec_avx2.c b/drivers/net/intel/ice/ice_rxtx_vec_avx2.c
index 12ffa0fa9a..98bab322b4 100644
--- a/drivers/net/intel/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/intel/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 {
 	struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
 	volatile struct ice_tx_desc *txdp;
-	struct ci_tx_entry *txep;
+	struct ci_tx_entry_vec *txep;
 	uint16_t n, nb_commit, tx_id;
 	uint64_t flags = ICE_TD_CMD;
 	uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -867,7 +867,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 	nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
 
 	if (txq->nb_tx_free < txq->tx_free_thresh)
-		ice_tx_free_bufs_vec(txq);
+		ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
 
 	nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
 	if (unlikely(nb_pkts == 0))
@@ -875,13 +875,13 @@
 
 	tx_id = txq->tx_tail;
 	txdp = &txq->ice_tx_ring[tx_id];
-	txep = &txq->sw_ring[tx_id];
+	txep = &txq->sw_ring_vec[tx_id];
 
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		ci_tx_backlog_entry(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
 		tx_pkts += (n - 1);
@@ -896,10 +896,10 @@
 
 		/* avoid reach the end of ring */
 		txdp = &txq->ice_tx_ring[tx_id];
-		txep = &txq->sw_ring[tx_id];
+		txep = &txq->sw_ring_vec[tx_id];
 	}
 
-	ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
 
diff --git a/drivers/net/intel/ice/ice_rxtx_vec_avx512.c b/drivers/net/intel/ice/ice_rxtx_vec_avx512.c
index f6ec593f96..481f784e34 100644
--- a/drivers/net/intel/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/intel/ice/ice_rxtx_vec_avx512.c
@@ -924,16 +924,6 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
 	}
 }
 
-static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
-			    struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
-	int i;
-
-	for (i = 0; i < (int)nb_pkts; ++i)
-		txep[i].mbuf = tx_pkts[i];
-}
-
 static __rte_always_inline uint16_t
 ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 				uint16_t nb_pkts, bool do_offload)
@@ -964,7 +954,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		ice_tx_backlog_entry_avx512(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		ice_vtx(txdp, tx_pkts, n - 1, flags, do_offload);
 		tx_pkts += (n - 1);
@@ -982,7 +972,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 		txep = (void *)txq->sw_ring;
 	}
 
-	ice_tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	ice_vtx(txdp, tx_pkts, nb_commit, flags, do_offload);
 
diff --git a/drivers/net/intel/ice/ice_rxtx_vec_common.h b/drivers/net/intel/ice/ice_rxtx_vec_common.h
index d03128d9b6..9bd7c2cbf6 100644
--- a/drivers/net/intel/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/intel/ice/ice_rxtx_vec_common.h
@@ -20,12 +20,6 @@ ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
 			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
 }
 
-static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
-{
-	return ci_tx_free_bufs(txq, ice_tx_desc_done);
-}
-
 static inline void
 _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
 {
diff --git a/drivers/net/intel/ice/ice_rxtx_vec_sse.c b/drivers/net/intel/ice/ice_rxtx_vec_sse.c
index bff39c28d8..73e3e9eb54 100644
--- a/drivers/net/intel/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/intel/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 {
 	struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
 	volatile struct ice_tx_desc *txdp;
-	struct ci_tx_entry *txep;
+	struct ci_tx_entry_vec *txep;
 	uint16_t n, nb_commit, tx_id;
 	uint64_t flags = ICE_TD_CMD;
 	uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -709,7 +709,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 	nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
 
 	if (txq->nb_tx_free < txq->tx_free_thresh)
-		ice_tx_free_bufs_vec(txq);
+		ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
 
 	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
 	nb_commit = nb_pkts;
@@ -718,13 +718,13 @@
 
 	tx_id = txq->tx_tail;
 	txdp = &txq->ice_tx_ring[tx_id];
-	txep = &txq->sw_ring[tx_id];
+	txep = &txq->sw_ring_vec[tx_id];
 
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		ci_tx_backlog_entry(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
 			ice_vtx1(txdp, *tx_pkts, flags);
@@ -738,10 +738,10 @@
 
 		/* avoid reach the end of ring */
 		txdp = &txq->ice_tx_ring[tx_id];
-		txep = &txq->sw_ring[tx_id];
+		txep = &txq->sw_ring_vec[tx_id];
 	}
 
-	ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	ice_vtx(txdp, tx_pkts, nb_commit, flags);

-- 
2.43.0