* [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers
@ 2024-11-22 12:53 Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 01/21] common/intel_eth: add pkt reassembly fn for intel drivers Bruce Richardson
` (25 more replies)
0 siblings, 26 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:53 UTC
To: dev; +Cc: Bruce Richardson
This RFC attempts to reduce the amount of code duplication across a
number of Intel NIC drivers, specifically: ixgbe, i40e, iavf, and ice.
The first patch extracts a function from the Rx side; otherwise, the
majority of the changes are on the Tx side, leading to a converged Tx
queue structure across the four drivers and a large number of common
functions.
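As a concrete sketch of how each driver picks up the shared code, the
per-driver meson.build files (changed later in the series) add the new
common directory to their include paths:

    includes += include_directories('base', '../../common/intel_eth')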
Open question:
* How should common code across drivers within a single device class be
managed?
- For now, I've created an "intel_eth" folder within the "common"
driver directory. Thinking about it afterwards, though, that placement
implies the code is common across driver classes.
- Would it be better to create an "intel_common" directory within the
"net" folder?
Bruce Richardson (21):
common/intel_eth: add pkt reassembly fn for intel drivers
common/intel_eth: provide common Tx entry structures
common/intel_eth: add Tx mbuf ring replenish fn
drivers/net: align Tx queue struct field names
drivers/net: add prefix for driver-specific structs
common/intel_eth: merge ice and i40e Tx queue struct
net/iavf: use common Tx queue structure
net/ixgbe: convert Tx queue context cache field to ptr
net/ixgbe: use common Tx queue structure
common/intel_eth: pack Tx queue structure
common/intel_eth: add post-Tx buffer free function
common/intel_eth: add Tx buffer free fn for AVX-512
net/iavf: use common Tx free fn for AVX-512
net/ice: move Tx queue mbuf cleanup fn to common
net/i40e: use common Tx queue mbuf cleanup fn
net/ixgbe: use common Tx queue mbuf cleanup fn
net/iavf: use common Tx queue mbuf cleanup fn
net/ice: use vector SW ring for all vector paths
net/i40e: use vector SW ring for all vector paths
net/iavf: use vector SW ring for all vector paths
net/ixgbe: use common Tx backlog entry fn
drivers/common/intel_eth/ieth_rxtx.h | 153 +++++++++++
.../common/intel_eth/ieth_rxtx_vec_common.h | 260 ++++++++++++++++++
drivers/net/i40e/i40e_ethdev.c | 4 +-
drivers/net/i40e/i40e_ethdev.h | 8 +-
drivers/net/i40e/i40e_fdir.c | 10 +-
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 6 +-
drivers/net/i40e/i40e_rxtx.c | 194 +++++--------
drivers/net/i40e/i40e_rxtx.h | 61 +---
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 144 +---------
drivers/net/i40e/i40e_rxtx_vec_common.h | 144 +---------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 26 +-
drivers/net/i40e/meson.build | 2 +-
drivers/net/iavf/iavf.h | 2 +-
drivers/net/iavf/iavf_ethdev.c | 4 +-
drivers/net/iavf/iavf_rxtx.c | 180 +++++-------
drivers/net/iavf/iavf_rxtx.h | 61 +---
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 46 ++--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 213 +++-----------
drivers/net/iavf/iavf_rxtx_vec_common.h | 160 +----------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 57 ++--
drivers/net/iavf/iavf_vchnl.c | 6 +-
drivers/net/iavf/meson.build | 2 +-
drivers/net/ice/ice_dcf.c | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_diagnose.c | 2 +-
drivers/net/ice/ice_ethdev.c | 2 +-
drivers/net/ice/ice_ethdev.h | 7 +-
drivers/net/ice/ice_rxtx.c | 164 +++++------
drivers/net/ice/ice_rxtx.h | 52 +---
drivers/net/ice/ice_rxtx_vec_avx2.c | 26 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 153 +----------
drivers/net/ice/ice_rxtx_vec_common.h | 190 +------------
drivers/net/ice/ice_rxtx_vec_sse.c | 30 +-
drivers/net/ice/meson.build | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 2 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 137 ++++-----
drivers/net/ixgbe/ixgbe_rxtx.h | 73 +----
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 119 +-------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 33 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 33 +--
drivers/net/ixgbe/meson.build | 2 +-
46 files changed, 1008 insertions(+), 1875 deletions(-)
create mode 100644 drivers/common/intel_eth/ieth_rxtx.h
create mode 100644 drivers/common/intel_eth/ieth_rxtx_vec_common.h
--
2.43.0
* [RFC PATCH 01/21] common/intel_eth: add pkt reassembly fn for intel drivers
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
@ 2024-11-22 12:53 ` Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 02/21] common/intel_eth: provide common Tx entry structures Bruce Richardson
` (24 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:53 UTC
To: dev
Cc: Bruce Richardson, David Christensen, Ian Stokes,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
The code for reassembling a single multi-mbuf packet from multiple
buffers received from the NIC is duplicated across many drivers. Rather
than keeping multiple copies of this function, we can create an
"intel_eth" common driver to hold such shared code, consolidating the
duplicates down to a single implementation for easier maintenance.
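To illustrate the calling-convention change (taken from the diffs
below), a driver's scattered Rx burst path goes from a driver-local
helper that reads the queue state internally:

    return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
            &split_flags[i]);

to the common helper, which takes the queue's segment state and CRC
length explicitly, so it needs no knowledge of any driver's Rx queue
structure:

    return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
            &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);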
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
.../common/intel_eth/ieth_rxtx_vec_common.h | 81 +++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 64 +--------------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +-
drivers/net/i40e/meson.build | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 8 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 8 +-
drivers/net/iavf/iavf_rxtx_vec_common.h | 65 +--------------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 +-
drivers/net/iavf/meson.build | 2 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +-
drivers/net/ice/ice_rxtx_vec_common.h | 66 +--------------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 +-
drivers/net/ice/meson.build | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 63 +--------------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 4 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 4 +-
drivers/net/ixgbe/meson.build | 2 +-
22 files changed, 123 insertions(+), 292 deletions(-)
create mode 100644 drivers/common/intel_eth/ieth_rxtx_vec_common.h
diff --git a/drivers/common/intel_eth/ieth_rxtx_vec_common.h b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
new file mode 100644
index 0000000000..0771af820c
--- /dev/null
+++ b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef IETH_RXTX_VEC_COMMON_H_
+#define IETH_RXTX_VEC_COMMON_H_
+
+#include <stdint.h>
+#include <unistd.h>
+#include <rte_mbuf.h>
+
+#define IETH_RX_BURST 32
+
+static inline uint16_t
+ieth_rx_reassemble_packets(struct rte_mbuf **rx_bufs,
+ uint16_t nb_bufs, uint8_t *split_flags,
+ struct rte_mbuf **pkt_first_seg,
+ struct rte_mbuf **pkt_last_seg,
+ const uint8_t crc_len)
+{
+ struct rte_mbuf *pkts[IETH_RX_BURST] = {0}; /* finished pkts */
+ struct rte_mbuf *start = *pkt_first_seg;
+ struct rte_mbuf *end = *pkt_last_seg;
+ unsigned int pkt_idx, buf_idx;
+
+ for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+ if (end) {
+ /* processing a split packet */
+ end->next = rx_bufs[buf_idx];
+ rx_bufs[buf_idx]->data_len += crc_len;
+
+ start->nb_segs++;
+ start->pkt_len += rx_bufs[buf_idx]->data_len;
+ end = end->next;
+
+ if (!split_flags[buf_idx]) {
+ /* it's the last packet of the set */
+ start->hash = end->hash;
+ start->vlan_tci = end->vlan_tci;
+ start->ol_flags = end->ol_flags;
+ /* we need to strip crc for the whole packet */
+ start->pkt_len -= crc_len;
+ if (end->data_len > crc_len)
+ end->data_len -= crc_len;
+ else {
+ /* free up last mbuf */
+ struct rte_mbuf *secondlast = start;
+
+ start->nb_segs--;
+ while (secondlast->next != end)
+ secondlast = secondlast->next;
+ secondlast->data_len -= (crc_len - end->data_len);
+ secondlast->next = NULL;
+ rte_pktmbuf_free_seg(end);
+ }
+ pkts[pkt_idx++] = start;
+ start = NULL;
+ end = NULL;
+ }
+ } else {
+ /* not processing a split packet */
+ if (!split_flags[buf_idx]) {
+ /* not a split packet, save and skip */
+ pkts[pkt_idx++] = rx_bufs[buf_idx];
+ continue;
+ }
+ start = rx_bufs[buf_idx];
+ end = start;
+ rx_bufs[buf_idx]->data_len += crc_len;
+ rx_bufs[buf_idx]->pkt_len += crc_len;
+ }
+ }
+
+ /* save the partial packet for next time */
+ *pkt_first_seg = start;
+ *pkt_last_seg = end;
+ memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+ return pkt_idx;
+}
+
+#endif /* IETH_RXTX_VEC_COMMON_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index b6b0d38ec1..526355f61d 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -494,8 +494,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
if (i == nb_bufs)
return nb_bufs;
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 19cf0ac718..231c5f6d4b 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -657,8 +657,8 @@ i40e_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/*
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 3b2750221b..30ce24634a 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -725,8 +725,8 @@ i40e_recv_scattered_burst_vec_avx512(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 8b745630e4..7cefbc98ef 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <ethdev_driver.h>
#include <rte_malloc.h>
+#include <ieth_rxtx_vec_common.h>
#include "i40e_ethdev.h"
#include "i40e_rxtx.h"
@@ -15,69 +16,6 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static inline uint16_t
-reassemble_packets(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[RTE_I40E_VPMD_RX_BURST]; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end != NULL) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len)
- end->data_len -= rxq->crc_len;
- else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index e1c5c7041b..ab0d4f1a15 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -623,8 +623,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index ad560d2b6b..03fb9eb59b 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -641,8 +641,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/meson.build b/drivers/net/i40e/meson.build
index 5c93493124..b965963b58 100644
--- a/drivers/net/i40e/meson.build
+++ b/drivers/net/i40e/meson.build
@@ -36,7 +36,7 @@ sources = files(
testpmd_sources = files('i40e_testpmd.c')
deps += ['hash']
-includes += include_directories('base')
+includes += include_directories('base', '../../common/intel_eth')
if arch_subdir == 'x86'
sources += files('i40e_rxtx_vec_sse.c')
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 49d41af953..a05494891b 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1508,8 +1508,8 @@ iavf_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1597,8 +1597,8 @@ iavf_recv_scattered_burst_vec_avx2_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index d6a861bf80..20ce9e2a3a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1685,8 +1685,8 @@ iavf_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1761,8 +1761,8 @@ iavf_recv_scattered_burst_vec_avx512_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 5c5220048d..874e10fd59 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <ethdev_driver.h>
#include <rte_malloc.h>
+#include <ieth_rxtx_vec_common.h>
#include "iavf.h"
#include "iavf_rxtx.h"
@@ -15,70 +16,6 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static __rte_always_inline uint16_t
-reassemble_packets(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[IAVF_VPMD_RX_MAX_BURST];
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len) {
- end->data_len -= rxq->crc_len;
- } else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = NULL;
- end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
iavf_tx_free_bufs(struct iavf_tx_queue *txq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 0db6fa8bd4..7c1a1b8fa9 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1238,8 +1238,8 @@ iavf_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1307,8 +1307,8 @@ iavf_recv_scattered_burst_vec_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index b48bb83438..d26cd3133a 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -5,7 +5,7 @@ if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
subdir_done()
endif
-includes += include_directories('../../common/iavf')
+includes += include_directories('../../common/iavf', '../../common/intel_eth')
testpmd_sources = files('iavf_testpmd.c')
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index d6e88dbb29..1a3df29503 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -726,8 +726,8 @@ ice_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index add095ef06..5e18f23204 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -763,8 +763,8 @@ ice_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -805,8 +805,8 @@ ice_recv_scattered_burst_vec_avx512_offload(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 4b73465af5..89e45939e7 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -5,77 +5,13 @@
#ifndef _ICE_RXTX_VEC_COMMON_H_
#define _ICE_RXTX_VEC_COMMON_H_
+#include <ieth_rxtx_vec_common.h>
#include "ice_rxtx.h"
#ifndef __INTEL_COMPILER
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static inline uint16_t
-ice_rx_reassemble_packets(struct ice_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[ICE_VPMD_RX_BURST] = {0}; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len) {
- end->data_len -= rxq->crc_len;
- } else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = NULL;
- end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- start = rx_bufs[buf_idx];
- end = start;
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index c01d8ede29..9fcd975ed2 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -640,8 +640,8 @@ ice_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index 1c9dc0cc6d..db1f85964c 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -19,7 +19,7 @@ sources = files(
testpmd_sources = files('ice_testpmd.c')
deps += ['hash', 'net', 'common_iavf']
-includes += include_directories('base', '../../common/iavf')
+includes += include_directories('base', '../../common/intel_eth')
if arch_subdir == 'x86'
sources += files('ice_rxtx_vec_sse.c')
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index a4d9ec9b08..275af944f7 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -7,71 +7,10 @@
#include <stdint.h>
#include <ethdev_driver.h>
+#include <ieth_rxtx_vec_common.h>
#include "ixgbe_ethdev.h"
#include "ixgbe_rxtx.h"
-static inline uint16_t
-reassemble_packets(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[nb_bufs]; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end != NULL) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len)
- end->data_len -= rxq->crc_len;
- else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 952b032eb6..91ba8036cf 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -516,8 +516,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a77370cdb7..a108a718a8 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -639,8 +639,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ieth_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ixgbe/meson.build b/drivers/net/ixgbe/meson.build
index 0ae12dd5ff..95ea27c1b9 100644
--- a/drivers/net/ixgbe/meson.build
+++ b/drivers/net/ixgbe/meson.build
@@ -35,6 +35,6 @@ elif arch_subdir == 'arm'
sources += files('ixgbe_recycle_mbufs_vec_common.c')
endif
-includes += include_directories('base')
+includes += include_directories('base', '../../common/intel_eth')
headers = files('rte_pmd_ixgbe.h')
--
2.43.0
* [RFC PATCH 02/21] common/intel_eth: provide common Tx entry structures
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 01/21] common/intel_eth: add pkt reassembly fn for intel drivers Bruce Richardson
@ 2024-11-22 12:53 ` Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 03/21] common/intel_eth: add Tx mbuf ring replenish fn Bruce Richardson
` (23 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:53 UTC
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
The Tx entry structures, both vector and scalar, are common across Intel
drivers, so provide a single definition to be used everywhere.
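For reference, the two common entry definitions added below are drop-in
replacements for the per-driver i40e/iavf/ice/ixgbe equivalents, and the
software-ring allocations switch over accordingly, e.g. (ring name
string abbreviated here):

    txq->sw_ring = rte_zmalloc_socket("tx sw ring",
            sizeof(struct ieth_tx_entry) * nb_desc,
            RTE_CACHE_LINE_SIZE, socket_id);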
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/common/intel_eth/ieth_rxtx.h | 29 +++++++++++++++++++
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 18 ++++++------
drivers/net/i40e/i40e_rxtx.h | 14 ++-------
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 ++--
drivers/net/i40e/i40e_rxtx_vec_common.h | 4 +--
drivers/net/i40e/i40e_rxtx_vec_neon.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 2 +-
drivers/net/iavf/iavf_rxtx.c | 12 ++++----
drivers/net/iavf/iavf_rxtx.h | 14 ++-------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 10 +++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 4 +--
drivers/net/iavf/iavf_rxtx_vec_sse.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 2 +-
drivers/net/ice/ice_rxtx.c | 16 +++++-----
drivers/net/ice/ice_rxtx.h | 13 ++-------
drivers/net/ice/ice_rxtx_vec_avx2.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 6 ++--
drivers/net/ice/ice_rxtx_vec_common.h | 6 ++--
drivers/net/ice/ice_rxtx_vec_sse.c | 2 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++++-----
drivers/net/ixgbe/ixgbe_rxtx.h | 22 +++-----------
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 8 ++---
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +-
29 files changed, 107 insertions(+), 117 deletions(-)
create mode 100644 drivers/common/intel_eth/ieth_rxtx.h
diff --git a/drivers/common/intel_eth/ieth_rxtx.h b/drivers/common/intel_eth/ieth_rxtx.h
new file mode 100644
index 0000000000..95a3cff048
--- /dev/null
+++ b/drivers/common/intel_eth/ieth_rxtx.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef IETH_RXTX_H_
+#define IETH_RXTX_H_
+
+#include <stdint.h>
+#include <rte_mbuf.h>
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue.
+ */
+struct ieth_tx_entry
+{
+ struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+ uint16_t next_id; /* Index of next descriptor in ring. */
+ uint16_t last_id; /* Index of last scattered descriptor. */
+};
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue in vector Tx.
+ */
+struct ieth_vec_tx_entry
+{
+ struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+};
+
+#endif /* IETH_RXTX_H_ */
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 14424c9921..5a23adc6a4 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -56,7 +56,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
struct i40e_tx_queue *txq = tx_queue;
- struct i40e_tx_entry *txep;
+ struct ieth_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
uint16_t nb_recycle_mbufs;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 839c8a5442..b628d83a42 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -378,7 +378,7 @@ i40e_build_ctob(uint32_t td_cmd,
static inline int
i40e_xmit_cleanup(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *sw_ring = txq->sw_ring;
+ struct ieth_tx_entry *sw_ring = txq->sw_ring;
volatile struct i40e_tx_desc *txd = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -1081,8 +1081,8 @@ uint16_t
i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct i40e_tx_queue *txq;
- struct i40e_tx_entry *sw_ring;
- struct i40e_tx_entry *txe, *txn;
+ struct ieth_tx_entry *sw_ring;
+ struct ieth_tx_entry *txe, *txn;
volatile struct i40e_tx_desc *txd;
volatile struct i40e_tx_desc *txr;
struct rte_mbuf *tx_pkt;
@@ -1331,7 +1331,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint16_t tx_rs_thresh = txq->tx_rs_thresh;
uint16_t i = 0, j = 0;
struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
@@ -1418,7 +1418,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
uint16_t nb_pkts)
{
volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
- struct i40e_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+ struct ieth_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
int mainpart, leftover;
@@ -2555,7 +2555,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket("i40e tx sw ring",
- sizeof(struct i40e_tx_entry) * nb_desc,
+ sizeof(struct ieth_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2723,7 +2723,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
*/
#ifdef CC_AVX512_SUPPORT
if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
- struct i40e_vec_tx_entry *swr = (void *)txq->sw_ring;
+ struct ieth_vec_tx_entry *swr = (void *)txq->sw_ring;
i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
if (txq->tx_tail < i) {
@@ -2768,7 +2768,7 @@ static int
i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
uint32_t free_cnt)
{
- struct i40e_tx_entry *swr_ring = txq->sw_ring;
+ struct ieth_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -2874,7 +2874,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
void
i40e_reset_tx_queue(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txe;
+ struct ieth_tx_entry *txe;
uint16_t i, prev, size;
if (!txq) {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 33fc9770d9..47ece1eb7d 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _I40E_RXTX_H_
#define _I40E_RXTX_H_
+#include <ieth_rxtx.h>
+
#define RTE_PMD_I40E_RX_MAX_BURST 32
#define RTE_PMD_I40E_TX_MAX_BURST 32
@@ -122,16 +124,6 @@ struct i40e_rx_queue {
const struct rte_memzone *mz;
};
-struct i40e_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct i40e_vec_tx_entry {
- struct rte_mbuf *mbuf;
-};
-
/*
* Structure associated with each TX queue.
*/
@@ -139,7 +131,7 @@ struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
uint64_t tx_ring_phys_addr; /**< TX ring DMA address */
volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
- struct i40e_tx_entry *sw_ring; /**< virtual address of SW ring */
+ struct ieth_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
volatile uint8_t *qtx_tail; /**< register address of tail */
uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 526355f61d..382a4d9305 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -553,7 +553,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 231c5f6d4b..48909d6230 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -745,7 +745,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 30ce24634a..25ed4c78a7 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -757,7 +757,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
static __rte_always_inline int
i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
{
- struct i40e_vec_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -920,7 +920,7 @@ vtx(volatile struct i40e_tx_desc *txdp,
}
static __rte_always_inline void
-tx_backlog_entry_avx512(struct i40e_vec_tx_entry *txep,
+tx_backlog_entry_avx512(struct ieth_vec_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -935,7 +935,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_vec_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 7cefbc98ef..3f6319ee65 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -19,7 +19,7 @@
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -85,7 +85,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct i40e_tx_entry *txep,
+tx_backlog_entry(struct ieth_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index ab0d4f1a15..09f52d0409 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -681,7 +681,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 03fb9eb59b..cff33343e7 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -700,7 +700,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6a093c6746..1db34fd12f 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -284,7 +284,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq)
static inline void
reset_tx_queue(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *txe;
+ struct ieth_tx_entry *txe;
uint32_t i, size;
uint16_t prev;
@@ -860,7 +860,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket("iavf tx sw ring",
- sizeof(struct iavf_tx_entry) * nb_desc,
+ sizeof(struct ieth_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2379,7 +2379,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue,
static inline int
iavf_xmit_cleanup(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *sw_ring = txq->sw_ring;
+ struct ieth_tx_entry *sw_ring = txq->sw_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -2797,8 +2797,8 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct iavf_tx_queue *txq = tx_queue;
volatile struct iavf_tx_desc *txr = txq->tx_ring;
- struct iavf_tx_entry *txe_ring = txq->sw_ring;
- struct iavf_tx_entry *txe, *txn;
+ struct ieth_tx_entry *txe_ring = txq->sw_ring;
+ struct ieth_tx_entry *txe, *txn;
struct rte_mbuf *mb, *mb_seg;
uint64_t buf_dma_addr;
uint16_t desc_idx, desc_idx_last;
@@ -4268,7 +4268,7 @@ static int
iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
uint32_t free_cnt)
{
- struct iavf_tx_entry *swr_ring = txq->sw_ring;
+ struct ieth_tx_entry *swr_ring = txq->sw_ring;
uint16_t tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 7b56076d32..63abe1cdb3 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _IAVF_RXTX_H_
#define _IAVF_RXTX_H_
+#include <ieth_rxtx.h>
+
/* In QLEN must be whole number of 32 descriptors. */
#define IAVF_ALIGN_RING_DESC 32
#define IAVF_MIN_RING_DESC 64
@@ -271,22 +273,12 @@ struct iavf_rx_queue {
uint64_t hw_time_update;
};
-struct iavf_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct iavf_tx_vec_entry {
- struct rte_mbuf *mbuf;
-};
-
/* Structure associated with each TX queue. */
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
- struct iavf_tx_entry *sw_ring; /* address array of SW ring */
+ struct ieth_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
uint16_t tx_tail; /* current value of tail */
volatile uint8_t *qtx_tail; /* register address of tail */
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index a05494891b..79c6b2027a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1736,7 +1736,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 20ce9e2a3a..91f42670db 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1847,7 +1847,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
static __rte_always_inline int
iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
{
- struct iavf_tx_vec_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -1960,7 +1960,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry_avx512(struct iavf_tx_vec_entry *txep,
+tx_backlog_entry_avx512(struct ieth_vec_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -2313,7 +2313,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_vec_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
@@ -2380,7 +2380,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_vec_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, nb_mbuf, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
@@ -2478,7 +2478,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */
const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
- struct iavf_tx_vec_entry *swr = (void *)txq->sw_ring;
+ struct ieth_vec_tx_entry *swr = (void *)txq->sw_ring;
if (!txq->sw_ring || txq->nb_free == max_desc)
return;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 874e10fd59..b237c9ab93 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -19,7 +19,7 @@
static __rte_always_inline int
iavf_tx_free_bufs(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -74,7 +74,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct iavf_tx_entry *txep,
+tx_backlog_entry(struct ieth_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 7c1a1b8fa9..48028c2e32 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1368,7 +1368,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 91f4943a11..f37dd2fdc1 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -389,7 +389,7 @@ reset_rx_queue(struct ice_rx_queue *rxq)
static inline void
reset_tx_queue(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txe;
+ struct ieth_tx_entry *txe;
uint32_t i, size;
uint16_t prev;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0c7106c7e0..9faa878caf 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1028,7 +1028,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
static void
ice_reset_tx_queue(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txe;
+ struct ieth_tx_entry *txe;
uint16_t i, prev, size;
if (!txq) {
@@ -1509,7 +1509,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket(NULL,
- sizeof(struct ice_tx_entry) * nb_desc,
+ sizeof(struct ieth_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2837,7 +2837,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
static inline int
ice_xmit_cleanup(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *sw_ring = txq->sw_ring;
+ struct ieth_tx_entry *sw_ring = txq->sw_ring;
volatile struct ice_tx_desc *txd = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -2961,8 +2961,8 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
struct ice_tx_queue *txq;
volatile struct ice_tx_desc *tx_ring;
volatile struct ice_tx_desc *txd;
- struct ice_tx_entry *sw_ring;
- struct ice_tx_entry *txe, *txn;
+ struct ieth_tx_entry *sw_ring;
+ struct ieth_tx_entry *txe, *txn;
struct rte_mbuf *tx_pkt;
struct rte_mbuf *m_seg;
uint32_t cd_tunneling_params;
@@ -3184,7 +3184,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
static __rte_always_inline int
ice_tx_free_bufs(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint16_t i;
if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
@@ -3221,7 +3221,7 @@ static int
ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
uint32_t free_cnt)
{
- struct ice_tx_entry *swr_ring = txq->sw_ring;
+ struct ieth_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -3361,7 +3361,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
- struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+ struct ieth_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
int mainpart, leftover;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 45f25b3609..615bed8a60 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -5,6 +5,7 @@
#ifndef _ICE_RXTX_H_
#define _ICE_RXTX_H_
+#include <ieth_rxtx.h>
#include "ice_ethdev.h"
#define ICE_ALIGN_RING_DESC 32
@@ -144,21 +145,11 @@ struct ice_rx_queue {
bool ts_enable; /* if rxq timestamp is enabled */
};
-struct ice_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct ice_vec_tx_entry {
- struct rte_mbuf *mbuf;
-};
-
struct ice_tx_queue {
uint16_t nb_tx_desc; /* number of TX descriptors */
rte_iova_t tx_ring_dma; /* TX ring DMA address */
volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
- struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+ struct ieth_tx_entry *sw_ring; /* virtual address of SW ring */
uint16_t tx_tail; /* current value of tail register */
volatile uint8_t *qtx_tail; /* register address of tail */
uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 1a3df29503..190e80a34e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5e18f23204..5ba6d15ef0 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -862,7 +862,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
static __rte_always_inline int
ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
{
- struct ice_vec_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -1040,7 +1040,7 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
}
static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ice_vec_tx_entry *txep,
+ice_tx_backlog_entry_avx512(struct ieth_vec_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -1055,7 +1055,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_vec_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 89e45939e7..5c30ecb674 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -15,7 +15,7 @@
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -70,7 +70,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
}
static __rte_always_inline void
-ice_tx_backlog_entry(struct ice_tx_entry *txep,
+ice_tx_backlog_entry(struct ieth_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -135,7 +135,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
- struct ice_vec_tx_entry *swr = (void *)txq->sw_ring;
+ struct ieth_vec_tx_entry *swr = (void *)txq->sw_ring;
if (txq->tx_tail < i) {
for (; i < txq->nb_tx_desc; i++) {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 9fcd975ed2..1bfed8f310 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index d451562269..4c8f6b7b64 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -52,7 +52,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
struct ixgbe_tx_queue *txq = tx_queue;
- struct ixgbe_tx_entry *txep;
+ struct ieth_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
uint32_t status;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0d42fd8a3b..28dca3fb7b 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -100,7 +100,7 @@
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry *txep;
+ struct ieth_tx_entry *txep;
uint32_t status;
int i, nb_free = 0;
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
@@ -199,7 +199,7 @@ ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
- struct ixgbe_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+ struct ieth_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP-1;
int mainpart, leftover;
@@ -563,7 +563,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
static inline int
ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry *sw_ring = txq->sw_ring;
+ struct ieth_tx_entry *sw_ring = txq->sw_ring;
volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -624,8 +624,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
struct ixgbe_tx_queue *txq;
- struct ixgbe_tx_entry *sw_ring;
- struct ixgbe_tx_entry *txe, *txn;
+ struct ieth_tx_entry *sw_ring;
+ struct ieth_tx_entry *txe, *txn;
volatile union ixgbe_adv_tx_desc *txr;
volatile union ixgbe_adv_tx_desc *txd, *txp;
struct rte_mbuf *tx_pkt;
@@ -2352,7 +2352,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
static int
ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
{
- struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
+ struct ieth_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -2490,7 +2490,7 @@ static void __rte_cold
ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = {{0}};
- struct ixgbe_tx_entry *txe = txq->sw_ring;
+ struct ieth_tx_entry *txe = txq->sw_ring;
uint16_t prev, i;
/* Zero out HW ring memory */
@@ -2795,7 +2795,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
- sizeof(struct ixgbe_tx_entry) * nb_desc,
+ sizeof(struct ieth_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq->sw_ring == NULL) {
ixgbe_tx_queue_release(txq);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 0550c1da60..552dd2b340 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _IXGBE_RXTX_H_
#define _IXGBE_RXTX_H_
+#include <ieth_rxtx.h>
+
/*
* Rings setup and release.
*
@@ -75,22 +77,6 @@ struct ixgbe_scattered_rx_entry {
struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
};
-/**
- * Structure associated with each descriptor of the TX ring of a TX queue.
- */
-struct ixgbe_tx_entry {
- struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
- uint16_t next_id; /**< Index of next descriptor in ring. */
- uint16_t last_id; /**< Index of last scattered descriptor. */
-};
-
-/**
- * Structure associated with each descriptor of the TX ring of a TX queue.
- */
-struct ixgbe_tx_entry_v {
- struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
-};
-
/**
* Structure associated with each RX queue.
*/
@@ -202,8 +188,8 @@ struct ixgbe_tx_queue {
volatile union ixgbe_adv_tx_desc *tx_ring;
uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
union {
- struct ixgbe_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
- struct ixgbe_tx_entry_v *sw_ring_v; /**< address of SW ring for vector PMD */
+ struct ieth_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
+ struct ieth_vec_tx_entry *sw_ring_v; /**< address of SW ring for vector PMD */
};
volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
uint16_t nb_tx_desc; /**< number of TX descriptors. */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 275af944f7..d25875935e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -14,7 +14,7 @@
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry_v *txep;
+ struct ieth_vec_tx_entry *txep;
uint32_t status;
uint32_t n;
uint32_t i;
@@ -69,7 +69,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct ixgbe_tx_entry_v *txep,
+tx_backlog_entry(struct ieth_vec_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -82,7 +82,7 @@ static inline void
_ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
{
unsigned int i;
- struct ixgbe_tx_entry_v *txe;
+ struct ieth_vec_tx_entry *txe;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc)
@@ -149,7 +149,7 @@ static inline void
_ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } };
- struct ixgbe_tx_entry_v *txe = txq->sw_ring_v;
+ struct ieth_vec_tx_entry *txe = txq->sw_ring_v;
uint16_t i;
/* Zero out HW ring memory */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 91ba8036cf..b8edef5228 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -573,7 +573,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
- struct ixgbe_tx_entry_v *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = DCMD_DTYP_FLAGS;
uint64_t rs = IXGBE_ADVTXD_DCMD_RS | DCMD_DTYP_FLAGS;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a108a718a8..0a9d21eaf3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -695,7 +695,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
- struct ixgbe_tx_entry_v *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = DCMD_DTYP_FLAGS;
uint64_t rs = IXGBE_ADVTXD_DCMD_RS|DCMD_DTYP_FLAGS;
--
2.43.0
* [RFC PATCH 03/21] common/intel_eth: add Tx mbuf ring replenish fn
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 01/21] common/intel_eth: add pkt reassembly fn for intel drivers Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 02/21] common/intel_eth: provide common Tx entry structures Bruce Richardson
@ 2024-11-22 12:53 ` Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 04/21] drivers/net: align Tx queue struct field names Bruce Richardson
` (22 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:53 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, David Christensen, Ian Stokes,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
Move the short function used to place mbufs on the SW Tx ring to common
code to avoid duplication.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/common/intel_eth/ieth_rxtx_vec_common.h | 7 +++++++
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_common.h | 10 ----------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 ++--
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 ++--
drivers/net/iavf/iavf_rxtx_vec_common.h | 10 ----------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 4 ++--
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 ++--
drivers/net/ice/ice_rxtx_vec_common.h | 10 ----------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 ++--
12 files changed, 23 insertions(+), 46 deletions(-)
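As context for review: every driver currently carries its own copy of this trivial loop, and each burst-Tx path calls it twice, either side of the software-ring wrap point. A minimal sketch of that call pattern follows — example_fill_backlog is a hypothetical name used only for illustration, while ieth_tx_backlog_entry and struct ieth_tx_entry are the names added by this patch:
/* Illustrative sketch, not part of the patch: how the common helper
 * is invoked on both sides of the software-ring wrap point.
 */
static inline void
example_fill_backlog(struct ieth_tx_entry *sw_ring, uint16_t nb_desc,
		uint16_t tx_id, struct rte_mbuf **tx_pkts, uint16_t nb_commit)
{
	struct ieth_tx_entry *txep = &sw_ring[tx_id];
	/* number of entries available before the end of the ring */
	uint16_t n = (uint16_t)(nb_desc - tx_id);

	if (nb_commit >= n) {
		/* fill up to the ring end, then wrap to entry 0 */
		ieth_tx_backlog_entry(txep, tx_pkts, n);
		tx_pkts += n;
		nb_commit = (uint16_t)(nb_commit - n);
		txep = &sw_ring[0];
	}
	ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
}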
diff --git a/drivers/common/intel_eth/ieth_rxtx_vec_common.h b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
index 0771af820c..49096d2a41 100644
--- a/drivers/common/intel_eth/ieth_rxtx_vec_common.h
+++ b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <stdint.h>
#include <unistd.h>
#include <rte_mbuf.h>
+#include "ieth_rxtx.h"
#define IETH_RX_BURST 32
@@ -78,4 +79,10 @@ ieth_rx_reassemble_packets(struct rte_mbuf **rx_bufs,
return pkt_idx;
}
+static __rte_always_inline void
+ieth_tx_backlog_entry(struct ieth_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ for (uint16_t i = 0; i < nb_pkts; ++i)
+ txep[i].mbuf = tx_pkts[i];
+}
#endif /* IETH_RXTX_VEC_COMMON_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 382a4d9305..614af752b8 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -575,7 +575,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -592,7 +592,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 48909d6230..2b0a774d47 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -765,7 +765,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -783,7 +783,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 3f6319ee65..676c3b1034 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -84,16 +84,6 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ieth_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 09f52d0409..2df7f3fed2 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -702,7 +702,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -719,7 +719,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index cff33343e7..23fbd9f852 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -721,7 +721,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -738,7 +738,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 79c6b2027a..9a7da591ac 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1757,7 +1757,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry(txep, tx_pkts, n);
iavf_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -1775,7 +1775,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index b237c9ab93..a53df9c52c 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -73,16 +73,6 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
return txq->rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ieth_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 48028c2e32..419080ac9d 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1390,7 +1390,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -1407,7 +1407,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 190e80a34e..657b40858b 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -881,7 +881,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -899,7 +899,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- ice_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 5c30ecb674..5266ba2d53 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -69,16 +69,6 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-ice_tx_backlog_entry(struct ieth_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 1bfed8f310..4f603976c5 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -724,7 +724,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
ice_vtx1(txdp, *tx_pkts, flags);
@@ -741,7 +741,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- ice_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [RFC PATCH 04/21] drivers/net: align Tx queue struct field names
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (2 preceding siblings ...)
2024-11-22 12:53 ` [RFC PATCH 03/21] common/intel_eth: add Tx mbuf ring replenish fn Bruce Richardson
@ 2024-11-22 12:53 ` Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 05/21] drivers/net: add prefix for driver-specific structs Bruce Richardson
` (21 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:53 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin,
Konstantin Ananyev, Anatoly Burakov, Wathsala Vithanage
Across the various Intel drivers, fields in the Tx queue structure that
serve the same function are sometimes given different names. Rename these
fields consistently across drivers to ease the future merging of the
structures.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 6 +--
drivers/net/i40e/i40e_rxtx.h | 2 +-
drivers/net/iavf/iavf_rxtx.c | 60 ++++++++++++-------------
drivers/net/iavf/iavf_rxtx.h | 14 +++---
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 18 ++++----
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 56 +++++++++++------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 24 +++++-----
drivers/net/iavf/iavf_rxtx_vec_sse.c | 18 ++++----
drivers/net/iavf/iavf_vchnl.c | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++----
drivers/net/ixgbe/ixgbe_rxtx.h | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +-
14 files changed, 114 insertions(+), 114 deletions(-)
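To summarise the renames for review — an illustrative sketch only (example_tx_queue is a hypothetical struct; each comment notes the old name replaced in the hunks below):
/* Sketch of the converged Tx queue field names, not real code */
struct example_tx_queue {
	rte_iova_t tx_ring_dma;     /* all: was uint64_t tx_ring_phys_addr */
	volatile uint8_t *qtx_tail; /* ixgbe: was volatile uint32_t *tdt_reg_addr */
	uint16_t nb_tx_used;        /* iavf: was nb_used */
	uint16_t nb_tx_free;        /* iavf: was nb_free */
	uint16_t tx_free_thresh;    /* iavf: was free_thresh */
	uint16_t tx_rs_thresh;      /* iavf: was rs_thresh */
	uint16_t tx_next_dd;        /* iavf: was next_dd */
	uint16_t tx_next_rs;        /* iavf: was next_rs */
};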
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b628d83a42..20e72cac54 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2549,7 +2549,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
/* Allocate software ring */
@@ -2923,7 +2923,7 @@ i40e_tx_queue_init(struct i40e_tx_queue *txq)
/* clear the context structure first */
memset(&tx_ctx, 0, sizeof(tx_ctx));
tx_ctx.new_context = 1;
- tx_ctx.base = txq->tx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT;
+ tx_ctx.base = txq->tx_ring_dma / I40E_QUEUE_BASE_ADDR_UNIT;
tx_ctx.qlen = txq->nb_tx_desc;
#ifdef RTE_LIBRTE_IEEE1588
@@ -3209,7 +3209,7 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
txq->vsi = pf->fdir.fdir_vsi;
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
/*
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 47ece1eb7d..c5fbadc9e2 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -129,7 +129,7 @@ struct i40e_rx_queue {
*/
struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
- uint64_t tx_ring_phys_addr; /**< TX ring DMA address */
+ rte_iova_t tx_ring_dma; /**< TX ring DMA address */
volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
struct ieth_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 1db34fd12f..b6d287245f 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -216,8 +216,8 @@ static inline bool
check_tx_vec_allow(struct iavf_tx_queue *txq)
{
if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) &&
- txq->rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
- txq->rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) {
+ txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
+ txq->tx_rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) {
PMD_INIT_LOG(DEBUG, "Vector tx can be enabled on this txq.");
return true;
}
@@ -309,13 +309,13 @@ reset_tx_queue(struct iavf_tx_queue *txq)
}
txq->tx_tail = 0;
- txq->nb_used = 0;
+ txq->nb_tx_used = 0;
txq->last_desc_cleaned = txq->nb_tx_desc - 1;
- txq->nb_free = txq->nb_tx_desc - 1;
+ txq->nb_tx_free = txq->nb_tx_desc - 1;
- txq->next_dd = txq->rs_thresh - 1;
- txq->next_rs = txq->rs_thresh - 1;
+ txq->tx_next_dd = txq->tx_rs_thresh - 1;
+ txq->tx_next_rs = txq->tx_rs_thresh - 1;
}
static int
@@ -845,8 +845,8 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
txq->nb_tx_desc = nb_desc;
- txq->rs_thresh = tx_rs_thresh;
- txq->free_thresh = tx_free_thresh;
+ txq->tx_rs_thresh = tx_rs_thresh;
+ txq->tx_free_thresh = tx_free_thresh;
txq->queue_id = queue_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
@@ -881,7 +881,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
rte_free(txq);
return -ENOMEM;
}
- txq->tx_ring_phys_addr = mz->iova;
+ txq->tx_ring_dma = mz->iova;
txq->tx_ring = (struct iavf_tx_desc *)mz->addr;
txq->mz = mz;
@@ -2387,7 +2387,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
volatile struct iavf_tx_desc *txd = txq->tx_ring;
- desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+ desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
if (desc_to_clean_to >= nb_tx_desc)
desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
@@ -2411,7 +2411,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
txq->last_desc_cleaned = desc_to_clean_to;
- txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
return 0;
}
@@ -2807,7 +2807,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Check if the descriptor ring needs to be cleaned. */
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_xmit_cleanup(txq);
desc_idx = txq->tx_tail;
@@ -2862,14 +2862,14 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
"port_id=%u queue_id=%u tx_first=%u tx_last=%u",
txq->port_id, txq->queue_id, desc_idx, desc_idx_last);
- if (nb_desc_required > txq->nb_free) {
+ if (nb_desc_required > txq->nb_tx_free) {
if (iavf_xmit_cleanup(txq)) {
if (idx == 0)
return 0;
goto end_of_tx;
}
- if (unlikely(nb_desc_required > txq->rs_thresh)) {
- while (nb_desc_required > txq->nb_free) {
+ if (unlikely(nb_desc_required > txq->tx_rs_thresh)) {
+ while (nb_desc_required > txq->nb_tx_free) {
if (iavf_xmit_cleanup(txq)) {
if (idx == 0)
return 0;
@@ -2991,10 +2991,10 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* The last packet data descriptor needs End Of Packet (EOP) */
ddesc_cmd = IAVF_TX_DESC_CMD_EOP;
- txq->nb_used = (uint16_t)(txq->nb_used + nb_desc_required);
- txq->nb_free = (uint16_t)(txq->nb_free - nb_desc_required);
+ txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_desc_required);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_desc_required);
- if (txq->nb_used >= txq->rs_thresh) {
+ if (txq->nb_tx_used >= txq->tx_rs_thresh) {
PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
"%4u (port=%d queue=%d)",
desc_idx_last, txq->port_id, txq->queue_id);
@@ -3002,7 +3002,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ddesc_cmd |= IAVF_TX_DESC_CMD_RS;
/* Update txq RS bit counters */
- txq->nb_used = 0;
+ txq->nb_tx_used = 0;
}
ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
@@ -4278,11 +4278,11 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
tx_id = txq->tx_tail;
tx_last = tx_id;
- if (txq->nb_free == 0 && iavf_xmit_cleanup(txq))
+ if (txq->nb_tx_free == 0 && iavf_xmit_cleanup(txq))
return 0;
- nb_tx_to_clean = txq->nb_free;
- nb_tx_free_last = txq->nb_free;
+ nb_tx_to_clean = txq->nb_tx_free;
+ nb_tx_free_last = txq->nb_tx_free;
if (!free_cnt)
free_cnt = txq->nb_tx_desc;
@@ -4305,16 +4305,16 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
tx_id = swr_ring[tx_id].next_id;
} while (--nb_tx_to_clean && pkt_cnt < free_cnt && tx_id != tx_last);
- if (txq->rs_thresh > txq->nb_tx_desc -
- txq->nb_free || tx_id == tx_last)
+ if (txq->tx_rs_thresh > txq->nb_tx_desc -
+ txq->nb_tx_free || tx_id == tx_last)
break;
if (pkt_cnt < free_cnt) {
if (iavf_xmit_cleanup(txq))
break;
- nb_tx_to_clean = txq->nb_free - nb_tx_free_last;
- nb_tx_free_last = txq->nb_free;
+ nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+ nb_tx_free_last = txq->nb_tx_free;
}
}
@@ -4356,8 +4356,8 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_free_thresh = txq->free_thresh;
- qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
qinfo->conf.offloads = txq->offloads;
qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
}
@@ -4432,8 +4432,8 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
desc = txq->tx_tail + offset;
/* go to next desc that has the RS bit */
- desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
- txq->rs_thresh;
+ desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+ txq->tx_rs_thresh;
if (desc >= txq->nb_tx_desc) {
desc -= txq->nb_tx_desc;
if (desc >= txq->nb_tx_desc)
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 63abe1cdb3..759f1759a7 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -277,25 +277,25 @@ struct iavf_rx_queue {
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
- uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
+ rte_iova_t tx_ring_dma; /* Tx ring DMA address */
struct ieth_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
uint16_t tx_tail; /* current value of tail */
volatile uint8_t *qtx_tail; /* register address of tail */
/* number of used desc since RS bit set */
- uint16_t nb_used;
- uint16_t nb_free;
+ uint16_t nb_tx_used;
+ uint16_t nb_tx_free;
uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t free_thresh;
- uint16_t rs_thresh;
+ uint16_t tx_free_thresh;
+ uint16_t tx_rs_thresh;
uint8_t rel_mbufs_type;
struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
uint16_t port_id;
uint16_t queue_id;
uint64_t offloads;
- uint16_t next_dd; /* next to set RS, for VPMD */
- uint16_t next_rs; /* next to check DD, for VPMD */
+ uint16_t tx_next_dd; /* next to check DD, for VPMD */
+ uint16_t tx_next_rs; /* next to set RS, for VPMD */
uint16_t ipsec_crypto_pkt_md_offset;
uint64_t mbuf_errors;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 9a7da591ac..a63763cdec 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1742,10 +1742,10 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs(txq);
- nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
@@ -1753,7 +1753,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txdp = &txq->tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -1768,7 +1768,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -1780,12 +1780,12 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -1806,7 +1806,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec_avx2(tx_queue, &tx_pkts[nb_tx],
num, offload);
nb_tx += ret;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 91f42670db..e04d66d757 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1854,18 +1854,18 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+ if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
- n = txq->rs_thresh >> txq->use_ctx;
+ n = txq->tx_rs_thresh >> txq->use_ctx;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
txep = (void *)txq->sw_ring;
- txep += (txq->next_dd >> txq->use_ctx) - (n - 1);
+ txep += (txq->tx_next_dd >> txq->use_ctx) - (n - 1);
if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
@@ -1951,12 +1951,12 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
done:
/* buffers were freed, update counters */
- txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
- txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
- if (txq->next_dd >= txq->nb_tx_desc)
- txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
- return txq->rs_thresh;
+ return txq->tx_rs_thresh;
}
static __rte_always_inline void
@@ -2319,10 +2319,10 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs_avx512(txq);
- nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
@@ -2331,7 +2331,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
txep += tx_id;
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -2346,7 +2346,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -2359,12 +2359,12 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -2386,10 +2386,10 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs_avx512(txq);
- nb_commit = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts << 1);
+ nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1);
nb_commit &= 0xFFFE;
if (unlikely(nb_commit == 0))
return 0;
@@ -2400,7 +2400,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
txep += (tx_id >> 1);
- txq->nb_free = (uint16_t)(txq->nb_free - nb_commit);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_commit);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (n != 0 && nb_commit >= n) {
@@ -2414,7 +2414,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
tx_id = 0;
/* avoid reach the end of ring */
txdp = txq->tx_ring;
@@ -2427,12 +2427,12 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -2452,7 +2452,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec_avx512(tx_queue, &tx_pkts[nb_tx],
num, offload);
nb_tx += ret;
@@ -2480,10 +2480,10 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
struct ieth_vec_tx_entry *swr = (void *)txq->sw_ring;
- if (!txq->sw_ring || txq->nb_free == max_desc)
+ if (!txq->sw_ring || txq->nb_tx_free == max_desc)
return;
- i = (txq->next_dd - txq->rs_thresh + 1) >> txq->use_ctx;
+ i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx;
while (i != end_desc) {
rte_pktmbuf_free_seg(swr[i].mbuf);
swr[i].mbuf = NULL;
@@ -2517,7 +2517,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->tx_rs_thresh);
num = num >> 1;
ret = iavf_xmit_fixed_burst_vec_avx512_ctx(tx_queue, &tx_pkts[nb_tx],
num, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index a53df9c52c..0a9243a684 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -26,17 +26,17 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+ if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
- n = txq->rs_thresh;
+ n = txq->tx_rs_thresh;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
- txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+ txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
if (likely(m != NULL)) {
free[0] = m;
@@ -65,12 +65,12 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
}
/* buffers were freed, update counters */
- txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
- txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
- if (txq->next_dd >= txq->nb_tx_desc)
- txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
- return txq->rs_thresh;
+ return txq->tx_rs_thresh;
}
static inline void
@@ -109,10 +109,10 @@ _iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq)
unsigned i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
- if (!txq->sw_ring || txq->nb_free == max_desc)
+ if (!txq->sw_ring || txq->nb_tx_free == max_desc)
return;
- i = txq->next_dd - txq->rs_thresh + 1;
+ i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
while (i != txq->tx_tail) {
rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
txq->sw_ring[i].mbuf = NULL;
@@ -169,8 +169,8 @@ iavf_tx_vec_queue_default(struct iavf_tx_queue *txq)
if (!txq)
return -1;
- if (txq->rs_thresh < IAVF_VPMD_TX_MAX_BURST ||
- txq->rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF)
+ if (txq->tx_rs_thresh < IAVF_VPMD_TX_MAX_BURST ||
+ txq->tx_rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF)
return -1;
if (txq->offloads & IAVF_TX_NO_VECTOR_FLAGS)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 419080ac9d..e9d19525ae 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1374,10 +1374,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
int i;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs(txq);
- nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
nb_commit = nb_pkts;
@@ -1386,7 +1386,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txdp = &txq->tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -1400,7 +1400,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -1412,12 +1412,12 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -1441,7 +1441,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num);
nb_tx += ret;
nb_pkts -= ret;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 065ab3594c..0646a2f978 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1247,7 +1247,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
/* Virtchnnl configure tx queues by pairs */
if (i < adapter->dev_data->nb_tx_queues) {
vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
- vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+ vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
}
vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
diff --git a/drivers/net/ixgbe/base/ixgbe_osdep.h b/drivers/net/ixgbe/base/ixgbe_osdep.h
index 502f386b56..95dbe2bedd 100644
--- a/drivers/net/ixgbe/base/ixgbe_osdep.h
+++ b/drivers/net/ixgbe/base/ixgbe_osdep.h
@@ -124,7 +124,7 @@ static inline uint32_t ixgbe_read_addr(volatile void* addr)
rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg)
#define IXGBE_PCI_REG_ADDR(hw, reg) \
- ((volatile uint32_t *)((char *)(hw)->hw_addr + (reg)))
+ ((volatile void *)((char *)(hw)->hw_addr + (reg)))
#define IXGBE_PCI_REG_ARRAY_ADDR(hw, reg, index) \
IXGBE_PCI_REG_ADDR((hw), (reg) + ((index) << 2))
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 28dca3fb7b..96a1021e48 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -308,7 +308,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* update tail pointer */
rte_wmb();
- IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
@@ -946,7 +946,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
(unsigned) txq->port_id, (unsigned) txq->queue_id,
(unsigned) tx_id, (unsigned) nb_tx);
- IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, tx_id);
+ IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, tx_id);
txq->tx_tail = tx_id;
return nb_tx;
@@ -2786,11 +2786,11 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
hw->mac.type == ixgbe_mac_X550_vf ||
hw->mac.type == ixgbe_mac_X550EM_x_vf ||
hw->mac.type == ixgbe_mac_X550EM_a_vf)
- txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx));
+ txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx));
else
- txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
+ txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
/* Allocate software ring */
@@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
- txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
+ txq->sw_ring, txq->tx_ring, txq->tx_ring_dma);
/* set up vector or scalar TX function as appropriate */
ixgbe_set_tx_function(dev, txq);
@@ -5303,7 +5303,7 @@ ixgbe_dev_tx_init(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- bus_addr = txq->tx_ring_phys_addr;
+ bus_addr = txq->tx_ring_dma;
IXGBE_WRITE_REG(hw, IXGBE_TDBAL(txq->reg_idx),
(uint32_t)(bus_addr & 0x00000000ffffffffULL));
IXGBE_WRITE_REG(hw, IXGBE_TDBAH(txq->reg_idx),
@@ -5886,7 +5886,7 @@ ixgbevf_dev_tx_init(struct rte_eth_dev *dev)
/* Setup the Base and Length of the Tx Descriptor Rings */
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- bus_addr = txq->tx_ring_phys_addr;
+ bus_addr = txq->tx_ring_dma;
IXGBE_WRITE_REG(hw, IXGBE_VFTDBAL(i),
(uint32_t)(bus_addr & 0x00000000ffffffffULL));
IXGBE_WRITE_REG(hw, IXGBE_VFTDBAH(i),
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 552dd2b340..e3e6ebb9e8 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -186,12 +186,12 @@ struct ixgbe_advctx_info {
struct ixgbe_tx_queue {
/** TX ring virtual address. */
volatile union ixgbe_adv_tx_desc *tx_ring;
- uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
+ rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
union {
struct ieth_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
struct ieth_vec_tx_entry *sw_ring_v; /**< address of SW ring for vector PMD */
};
- volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
+ volatile uint8_t *qtx_tail; /**< Address of TDT register. */
uint16_t nb_tx_desc; /**< number of TX descriptors. */
uint16_t tx_tail; /**< current value of TDT reg. */
/**< Start freeing TX buffers if there are less free descriptors than
@@ -218,7 +218,7 @@ struct ixgbe_tx_queue {
/** Hardware context0 history. */
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
const struct ixgbe_txq_ops *ops; /**< txq ops */
- uint8_t tx_deferred_start; /**< not in global dev start. */
+ _Bool tx_deferred_start; /**< not in global dev start. */
#ifdef RTE_LIB_SECURITY
uint8_t using_ipsec;
/**< indicates that IPsec TX feature is in use */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index b8edef5228..100f77cea6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -628,7 +628,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_tail = tx_id;
- IXGBE_PCI_REG_WRITE(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 0a9d21eaf3..017e3d6674 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -751,7 +751,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_tail = tx_id;
- IXGBE_PCI_REG_WC_WRITE(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WC_WRITE(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
--
2.43.0
* [RFC PATCH 05/21] drivers/net: add prefix for driver-specific structs
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (3 preceding siblings ...)
2024-11-22 12:53 ` [RFC PATCH 04/21] drivers/net: align Tx queue struct field names Bruce Richardson
@ 2024-11-22 12:53 ` Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 06/21] common/intel_eth: merge ice and i40e Tx queue struct Bruce Richardson
` (20 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:53 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
In preparation for merging the Tx structs of multiple drivers into a
single struct, rename the driver-specific pointers in each struct to
carry a driver prefix, avoiding name conflicts in the merged structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_fdir.c | 6 +--
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 30 ++++++------
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 8 ++--
drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +-
drivers/net/i40e/i40e_rxtx_vec_neon.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_sse.c | 6 +--
drivers/net/iavf/iavf_rxtx.c | 24 +++++-----
drivers/net/iavf/iavf_rxtx.h | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 6 +--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++---
drivers/net/iavf/iavf_rxtx_vec_common.h | 2 +-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 6 +--
drivers/net/ice/ice_dcf_ethdev.c | 4 +-
drivers/net/ice/ice_rxtx.c | 48 +++++++++----------
drivers/net/ice/ice_rxtx.h | 4 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 6 +--
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 ++--
drivers/net/ice/ice_rxtx_vec_common.h | 4 +-
drivers/net/ice/ice_rxtx_vec_sse.c | 6 +--
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 22 ++++-----
drivers/net/ixgbe/ixgbe_rxtx.h | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 6 +--
29 files changed, 128 insertions(+), 128 deletions(-)
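The renaming pattern is mechanical: driver-specific members keep their types but gain the driver name as a prefix, so that they can later coexist in a merged queue structure. A sketch for i40e, illustrative only (the iavf, ice and ixgbe hunks follow the same pattern):
/* Illustrative sketch of the i40e renames in this patch */
struct i40e_tx_queue {
	volatile struct i40e_tx_desc *i40e_tx_ring; /* was: tx_ring */
	struct i40e_vsi *i40e_vsi;                  /* was: vsi */
	/* remaining, non-driver-specific fields are unchanged */
};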
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index 47f79ecf11..c600167634 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -1383,7 +1383,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev)
volatile struct i40e_tx_desc *tmp_txdp;
tmp_tail = txq->tx_tail;
- tmp_txdp = &txq->tx_ring[tmp_tail + 1];
+ tmp_txdp = &txq->i40e_tx_ring[tmp_tail + 1];
do {
if ((tmp_txdp->cmd_type_offset_bsz &
@@ -1640,7 +1640,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
PMD_DRV_LOG(INFO, "filling filter programming descriptor.");
fdirdp = (volatile struct i40e_filter_program_desc *)
- (&txq->tx_ring[txq->tx_tail]);
+ (&txq->i40e_tx_ring[txq->tx_tail]);
fdirdp->qindex_flex_ptype_vsi =
rte_cpu_to_le_32((fdir_action->rx_queue <<
@@ -1710,7 +1710,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
fdirdp->fd_id = rte_cpu_to_le_32(filter->soft_id);
PMD_DRV_LOG(INFO, "filling transmit descriptor.");
- txdp = &txq->tx_ring[txq->tx_tail + 1];
+ txdp = &txq->i40e_tx_ring[txq->tx_tail + 1];
txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr[txq->tx_tail >> 1]);
td_cmd = I40E_TX_DESC_CMD_EOP |
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 5a23adc6a4..167ee8d428 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -75,7 +75,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
return 0;
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 20e72cac54..5b8edac3b2 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -379,7 +379,7 @@ static inline int
i40e_xmit_cleanup(struct i40e_tx_queue *txq)
{
struct ieth_tx_entry *sw_ring = txq->sw_ring;
- volatile struct i40e_tx_desc *txd = txq->tx_ring;
+ volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -1103,7 +1103,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq = tx_queue;
sw_ring = txq->sw_ring;
- txr = txq->tx_ring;
+ txr = txq->i40e_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
@@ -1338,7 +1338,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
const uint16_t k = RTE_ALIGN_FLOOR(tx_rs_thresh, RTE_I40E_TX_MAX_FREE_BUF_SZ);
const uint16_t m = tx_rs_thresh % RTE_I40E_TX_MAX_FREE_BUF_SZ;
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -1417,7 +1417,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
+ volatile struct i40e_tx_desc *txdp = &(txq->i40e_tx_ring[txq->tx_tail]);
struct ieth_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
@@ -1445,7 +1445,7 @@ tx_xmit_pkts(struct i40e_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- volatile struct i40e_tx_desc *txr = txq->tx_ring;
+ volatile struct i40e_tx_desc *txr = txq->i40e_tx_ring;
uint16_t n = 0;
/**
@@ -1556,7 +1556,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
bool pkt_error = false;
const char *reason = NULL;
uint16_t good_pkts = nb_pkts;
- struct i40e_adapter *adapter = txq->vsi->adapter;
+ struct i40e_adapter *adapter = txq->i40e_vsi->adapter;
for (idx = 0; idx < nb_pkts; idx++) {
mb = tx_pkts[idx];
@@ -2329,7 +2329,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->i40e_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_le_to_cpu_64(I40E_TXD_QW1_DTYPE_MASK);
expect = rte_cpu_to_le_64(
I40E_TX_DESC_DTYPE_DESC_DONE << I40E_TXD_QW1_DTYPE_SHIFT);
@@ -2527,7 +2527,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct i40e_tx_desc) * I40E_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "i40e_tx_ring", queue_idx,
ring_size, I40E_RING_BASE_ALIGN, socket_id);
if (!tz) {
i40e_tx_queue_release(txq);
@@ -2546,11 +2546,11 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->reg_idx = reg_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
- txq->vsi = vsi;
+ txq->i40e_vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
+ txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr;
/* Allocate software ring */
txq->sw_ring =
@@ -2885,11 +2885,11 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct i40e_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->i40e_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile struct i40e_tx_desc *txd = &txq->tx_ring[i];
+ volatile struct i40e_tx_desc *txd = &txq->i40e_tx_ring[i];
txd->cmd_type_offset_bsz =
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
@@ -2914,7 +2914,7 @@ int
i40e_tx_queue_init(struct i40e_tx_queue *txq)
{
enum i40e_status_code err = I40E_SUCCESS;
- struct i40e_vsi *vsi = txq->vsi;
+ struct i40e_vsi *vsi = txq->i40e_vsi;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t pf_q = txq->reg_idx;
struct i40e_hmc_obj_txq tx_ctx;
@@ -3207,10 +3207,10 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
txq->nb_tx_desc = I40E_FDIR_NUM_TX_DESC;
txq->queue_id = I40E_FDIR_QUEUE_ID;
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
- txq->vsi = pf->fdir.fdir_vsi;
+ txq->i40e_vsi = pf->fdir.fdir_vsi;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
+ txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr;
/*
* don't need to allocate software ring and reset for the fdir
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index c5fbadc9e2..030c381e0c 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -130,7 +130,7 @@ struct i40e_rx_queue {
struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
rte_iova_t tx_ring_dma; /**< TX ring DMA address */
- volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
+ volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */
struct ieth_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
volatile uint8_t *qtx_tail; /**< register address of tail */
@@ -150,7 +150,7 @@ struct i40e_tx_queue {
uint16_t port_id; /**< Device port identifier. */
uint16_t queue_id; /**< TX queue index. */
uint16_t reg_idx;
- struct i40e_vsi *vsi; /**< the VSI this queue belongs to */
+ struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */
uint16_t tx_next_dd;
uint16_t tx_next_rs;
bool q_set; /**< indicate if tx queue has been configured */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 614af752b8..aed78e4a1a 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -568,7 +568,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -588,7 +588,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -598,7 +598,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 2b0a774d47..6b7c96c683 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -758,7 +758,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -779,7 +779,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -789,7 +789,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 25ed4c78a7..33c1655c9a 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -764,7 +764,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -948,7 +948,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -970,7 +970,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->i40e_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -980,7 +980,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 676c3b1034..a70d9fce78 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -26,7 +26,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 2df7f3fed2..23aaf3a739 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -695,7 +695,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -715,7 +715,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -725,7 +725,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 23fbd9f852..499b6e6ff7 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -714,7 +714,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -734,7 +734,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -744,7 +744,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index b6d287245f..2d0f8eda79 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -296,11 +296,11 @@ reset_tx_queue(struct iavf_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct iavf_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->iavf_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i].cmd_type_offset_bsz =
+ txq->iavf_tx_ring[i].cmd_type_offset_bsz =
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
txe[i].mbuf = NULL;
txe[i].last_id = i;
@@ -851,7 +851,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
- txq->vsi = vsi;
+ txq->iavf_vsi = vsi;
if (iavf_ipsec_crypto_supported(adapter))
txq->ipsec_crypto_pkt_md_offset =
@@ -872,7 +872,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct iavf_tx_desc) * IAVF_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, IAVF_DMA_MEM_ALIGN);
- mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ mz = rte_eth_dma_zone_reserve(dev, "iavf_tx_ring", queue_idx,
ring_size, IAVF_RING_BASE_ALIGN,
socket_id);
if (!mz) {
@@ -882,7 +882,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
txq->tx_ring_dma = mz->iova;
- txq->tx_ring = (struct iavf_tx_desc *)mz->addr;
+ txq->iavf_tx_ring = (struct iavf_tx_desc *)mz->addr;
txq->mz = mz;
reset_tx_queue(txq);
@@ -2385,7 +2385,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
uint16_t desc_to_clean_to;
uint16_t nb_tx_to_clean;
- volatile struct iavf_tx_desc *txd = txq->tx_ring;
+ volatile struct iavf_tx_desc *txd = txq->iavf_tx_ring;
desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
if (desc_to_clean_to >= nb_tx_desc)
@@ -2796,7 +2796,7 @@ uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct iavf_tx_queue *txq = tx_queue;
- volatile struct iavf_tx_desc *txr = txq->tx_ring;
+ volatile struct iavf_tx_desc *txr = txq->iavf_tx_ring;
struct ieth_tx_entry *txe_ring = txq->sw_ring;
struct ieth_tx_entry *txe, *txn;
struct rte_mbuf *mb, *mb_seg;
@@ -3803,10 +3803,10 @@ iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
struct iavf_tx_queue *txq = tx_queue;
enum iavf_tx_burst_type tx_burst_type;
- if (!txq->vsi || txq->vsi->adapter->no_poll)
+ if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll)
return 0;
- tx_burst_type = txq->vsi->adapter->tx_burst_type;
+ tx_burst_type = txq->iavf_vsi->adapter->tx_burst_type;
return iavf_tx_pkt_burst_ops[tx_burst_type](tx_queue,
tx_pkts, nb_pkts);
@@ -3824,9 +3824,9 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
const char *reason = NULL;
bool pkt_error = false;
struct iavf_tx_queue *txq = tx_queue;
- struct iavf_adapter *adapter = txq->vsi->adapter;
+ struct iavf_adapter *adapter = txq->iavf_vsi->adapter;
enum iavf_tx_burst_type tx_burst_type =
- txq->vsi->adapter->tx_burst_type;
+ txq->iavf_vsi->adapter->tx_burst_type;
for (idx = 0; idx < nb_pkts; idx++) {
mb = tx_pkts[idx];
@@ -4440,7 +4440,7 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->iavf_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_le_to_cpu_64(IAVF_TXD_QW1_DTYPE_MASK);
expect = rte_cpu_to_le_64(
IAVF_TX_DESC_DTYPE_DESC_DONE << IAVF_TXD_QW1_DTYPE_SHIFT);
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 759f1759a7..cba6d0573b 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -276,7 +276,7 @@ struct iavf_rx_queue {
/* Structure associated with each TX queue. */
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
+ volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */
rte_iova_t tx_ring_dma; /* Tx ring DMA address */
struct ieth_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
@@ -289,7 +289,7 @@ struct iavf_tx_queue {
uint16_t tx_free_thresh;
uint16_t tx_rs_thresh;
uint8_t rel_mbufs_type;
- struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
+ struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */
uint16_t port_id;
uint16_t queue_id;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index a63763cdec..94cf9c0038 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1750,7 +1750,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -1771,7 +1771,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -1781,7 +1781,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index e04d66d757..dd45bc0fd9 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1854,7 +1854,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -2327,7 +2327,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -2349,7 +2349,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
}
@@ -2360,7 +2360,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
@@ -2396,7 +2396,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = nb_commit >> 1;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += (tx_id >> 1);
@@ -2417,7 +2417,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
tx_id = 0;
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->iavf_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -2428,7 +2428,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 0a9243a684..b8b5e74b89 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -26,7 +26,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index e9d19525ae..0a896a6e6f 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1383,7 +1383,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -1403,7 +1403,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -1413,7 +1413,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index f37dd2fdc1..9485494f86 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -401,11 +401,11 @@ reset_tx_queue(struct ice_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->ice_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i].cmd_type_offset_bsz =
+ txq->ice_tx_ring[i].cmd_type_offset_bsz =
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
txe[i].mbuf = NULL;
txe[i].last_id = i;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 9faa878caf..df9b09ae0c 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -776,7 +776,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
if (!txq_elem)
return -ENOMEM;
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
hw = ICE_VSI_TO_HW(vsi);
pf = ICE_VSI_TO_PF(vsi);
@@ -966,7 +966,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
if (!txq_elem)
return -ENOMEM;
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
hw = ICE_VSI_TO_HW(vsi);
memset(&tx_ctx, 0, sizeof(tx_ctx));
@@ -1039,11 +1039,11 @@ ice_reset_tx_queue(struct ice_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->ice_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+ volatile struct ice_tx_desc *txd = &txq->ice_tx_ring[i];
txd->cmd_type_offset_bsz =
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
@@ -1153,7 +1153,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
PMD_DRV_LOG(INFO, "TX queue %u not started", tx_queue_id);
return 0;
}
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
q_ids[0] = txq->reg_idx;
q_teids[0] = txq->q_teid;
@@ -1479,7 +1479,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "ice_tx_ring", queue_idx,
ring_size, ICE_RING_BASE_ALIGN,
socket_id);
if (!tz) {
@@ -1500,11 +1500,11 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
txq->reg_idx = vsi->base_queue + queue_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
- txq->vsi = vsi;
+ txq->ice_vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = tz->addr;
+ txq->ice_tx_ring = tz->addr;
/* Allocate software ring */
txq->sw_ring =
@@ -2372,7 +2372,7 @@ ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->ice_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
ICE_TXD_QW1_DTYPE_S);
@@ -2452,10 +2452,10 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
txq->nb_tx_desc = ICE_FDIR_NUM_TX_DESC;
txq->queue_id = ICE_FDIR_QUEUE_ID;
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
- txq->vsi = pf->fdir.fdir_vsi;
+ txq->ice_vsi = pf->fdir.fdir_vsi;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+ txq->ice_tx_ring = (struct ice_tx_desc *)tz->addr;
/*
* don't need to allocate software ring and reset for the fdir
* program queue just set the queue has been configured.
@@ -2838,7 +2838,7 @@ static inline int
ice_xmit_cleanup(struct ice_tx_queue *txq)
{
struct ieth_tx_entry *sw_ring = txq->sw_ring;
- volatile struct ice_tx_desc *txd = txq->tx_ring;
+ volatile struct ice_tx_desc *txd = txq->ice_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -2959,7 +2959,7 @@ uint16_t
ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct ice_tx_queue *txq;
- volatile struct ice_tx_desc *tx_ring;
+ volatile struct ice_tx_desc *ice_tx_ring;
volatile struct ice_tx_desc *txd;
struct ieth_tx_entry *sw_ring;
struct ieth_tx_entry *txe, *txn;
@@ -2981,7 +2981,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq = tx_queue;
sw_ring = txq->sw_ring;
- tx_ring = txq->tx_ring;
+ ice_tx_ring = txq->ice_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
@@ -3064,7 +3064,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Setup TX context descriptor if required */
volatile struct ice_tx_ctx_desc *ctx_txd =
(volatile struct ice_tx_ctx_desc *)
- &tx_ring[tx_id];
+ &ice_tx_ring[tx_id];
uint16_t cd_l2tag2 = 0;
uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
@@ -3082,7 +3082,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
cd_type_cmd_tso_mss |=
((uint64_t)ICE_TX_CTX_DESC_TSYN <<
ICE_TXD_CTX_QW1_CMD_S) |
- (((uint64_t)txq->vsi->adapter->ptp_tx_index <<
+ (((uint64_t)txq->ice_vsi->adapter->ptp_tx_index <<
ICE_TXD_CTX_QW1_TSYN_S) & ICE_TXD_CTX_QW1_TSYN_M);
ctx_txd->tunneling_params =
@@ -3106,7 +3106,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
m_seg = tx_pkt;
do {
- txd = &tx_ring[tx_id];
+ txd = &ice_tx_ring[tx_id];
txn = &sw_ring[txe->next_id];
if (txe->mbuf)
@@ -3134,7 +3134,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txe->last_id = tx_last;
tx_id = txe->next_id;
txe = txn;
- txd = &tx_ring[tx_id];
+ txd = &ice_tx_ring[tx_id];
txn = &sw_ring[txe->next_id];
}
@@ -3187,7 +3187,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
struct ieth_tx_entry *txep;
uint16_t i;
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -3360,7 +3360,7 @@ static inline void
ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+ volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail];
struct ieth_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
@@ -3393,7 +3393,7 @@ tx_xmit_pkts(struct ice_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- volatile struct ice_tx_desc *txr = txq->tx_ring;
+ volatile struct ice_tx_desc *txr = txq->ice_tx_ring;
uint16_t n = 0;
/**
@@ -3722,7 +3722,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
bool pkt_error = false;
uint16_t good_pkts = nb_pkts;
const char *reason = NULL;
- struct ice_adapter *adapter = txq->vsi->adapter;
+ struct ice_adapter *adapter = txq->ice_vsi->adapter;
uint64_t ol_flags;
for (idx = 0; idx < nb_pkts; idx++) {
@@ -4701,11 +4701,11 @@ ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc)
uint16_t i;
fdirdp = (volatile struct ice_fltr_desc *)
- (&txq->tx_ring[txq->tx_tail]);
+ (&txq->ice_tx_ring[txq->tx_tail]);
fdirdp->qidx_compq_space_stat = fdir_desc->qidx_compq_space_stat;
fdirdp->dtype_cmd_vsi_fdid = fdir_desc->dtype_cmd_vsi_fdid;
- txdp = &txq->tx_ring[txq->tx_tail + 1];
+ txdp = &txq->ice_tx_ring[txq->tx_tail + 1];
txdp->buf_addr = rte_cpu_to_le_64(pf->fdir.dma_addr);
td_cmd = ICE_TX_DESC_CMD_EOP |
ICE_TX_DESC_CMD_RS |
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 615bed8a60..91f8ed2036 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -148,7 +148,7 @@ struct ice_rx_queue {
struct ice_tx_queue {
uint16_t nb_tx_desc; /* number of TX descriptors */
rte_iova_t tx_ring_dma; /* TX ring DMA address */
- volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+ volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */
struct ieth_tx_entry *sw_ring; /* virtual address of SW ring */
uint16_t tx_tail; /* current value of tail register */
volatile uint8_t *qtx_tail; /* register address of tail */
@@ -171,7 +171,7 @@ struct ice_tx_queue {
uint32_t q_teid; /* TX schedule node id. */
uint16_t reg_idx;
uint64_t offloads;
- struct ice_vsi *vsi; /* the VSI this queue belongs to */
+ struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */
uint16_t tx_next_dd;
uint16_t tx_next_rs;
uint64_t mbuf_errors;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 657b40858b..d4c76686f7 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -874,7 +874,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -895,7 +895,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -905,7 +905,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5ba6d15ef0..1126a30bf8 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -869,7 +869,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -1071,7 +1071,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -1093,7 +1093,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->ice_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -1103,7 +1103,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 5266ba2d53..b2e3c0f6b7 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -22,7 +22,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -121,7 +121,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
#ifdef __AVX512VL__
- struct rte_eth_dev *dev = &rte_eth_devices[txq->vsi->adapter->pf.dev_data->port_id];
+ struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id];
if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 4f603976c5..5db66f3c6a 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -717,7 +717,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -737,7 +737,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -747,7 +747,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index 4c8f6b7b64..546825f334 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -72,7 +72,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
return 0;
/* check DD bits on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & IXGBE_ADVTXD_STAT_DD))
return 0;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 96a1021e48..c3b704c201 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -106,7 +106,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
/* check DD bit on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD)))
return 0;
@@ -198,7 +198,7 @@ static inline void
ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
+ volatile union ixgbe_adv_tx_desc *txdp = &(txq->ixgbe_tx_ring[txq->tx_tail]);
struct ieth_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP-1;
@@ -232,7 +232,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
- volatile union ixgbe_adv_tx_desc *tx_r = txq->tx_ring;
+ volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring;
uint16_t n = 0;
/*
@@ -564,7 +564,7 @@ static inline int
ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
{
struct ieth_tx_entry *sw_ring = txq->sw_ring;
- volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring;
+ volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -652,7 +652,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.data[1] = 0;
txq = tx_queue;
sw_ring = txq->sw_ring;
- txr = txq->tx_ring;
+ txr = txq->ixgbe_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
txp = NULL;
@@ -2495,13 +2495,13 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
/* Zero out HW ring memory */
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i] = zeroed_desc;
+ txq->ixgbe_tx_ring[i] = zeroed_desc;
}
/* Initialize SW ring entries */
prev = (uint16_t) (txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i];
+ volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i];
txd->wb.status = rte_cpu_to_le_32(IXGBE_TXD_STAT_DD);
txe[i].mbuf = NULL;
@@ -2751,7 +2751,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
* handle the maximum ring size is allocated in order to allow for
* resizing in later calls to the queue setup function.
*/
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "ixgbe_tx_ring", queue_idx,
sizeof(union ixgbe_adv_tx_desc) * IXGBE_MAX_RING_DESC,
IXGBE_ALIGN, socket_id);
if (tz == NULL) {
@@ -2791,7 +2791,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
+ txq->ixgbe_tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
/* Allocate software ring */
txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
@@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
- txq->sw_ring, txq->tx_ring, txq->tx_ring_dma);
+ txq->sw_ring, txq->ixgbe_tx_ring, txq->tx_ring_dma);
/* set up vector or scalar TX function as appropriate */
ixgbe_set_tx_function(dev, txq);
@@ -3328,7 +3328,7 @@ ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].wb.status;
+ status = &txq->ixgbe_tx_ring[desc].wb.status;
if (*status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD))
return RTE_ETH_TX_DESC_DONE;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index e3e6ebb9e8..4e437f95e3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -185,7 +185,7 @@ struct ixgbe_advctx_info {
*/
struct ixgbe_tx_queue {
/** TX ring virtual address. */
- volatile union ixgbe_adv_tx_desc *tx_ring;
+ volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
union {
struct ieth_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index d25875935e..fc254ef3d3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -22,7 +22,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
/* check DD bit on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & IXGBE_ADVTXD_STAT_DD))
return 0;
@@ -154,11 +154,11 @@ _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
/* Zero out HW ring memory */
for (i = 0; i < txq->nb_tx_desc; i++)
- txq->tx_ring[i] = zeroed_desc;
+ txq->ixgbe_tx_ring[i] = zeroed_desc;
/* Initialize SW ring entries */
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i];
+ volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i];
txd->wb.status = IXGBE_TXD_STAT_DD;
txe[i].mbuf = NULL;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 100f77cea6..e4381802c8 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -590,7 +590,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -610,7 +610,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
}
@@ -620,7 +620,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |=
+ txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |=
rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
txq->tx_next_rs = (uint16_t)(txq->tx_next_rs +
txq->tx_rs_thresh);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 017e3d6674..4c8cc22f59 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -712,7 +712,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -733,7 +733,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &(txq->tx_ring[tx_id]);
+ txdp = &(txq->ixgbe_tx_ring[tx_id]);
txep = &txq->sw_ring_v[tx_id];
}
@@ -743,7 +743,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |=
+ txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |=
rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
txq->tx_next_rs = (uint16_t)(txq->tx_next_rs +
txq->tx_rs_thresh);
--
2.43.0
* [RFC PATCH 06/21] common/intel_eth: merge ice and i40e Tx queue struct
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (4 preceding siblings ...)
2024-11-22 12:53 ` [RFC PATCH 05/21] drivers/net: add prefix for driver-specific structs Bruce Richardson
@ 2024-11-22 12:53 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 07/21] net/iavf: use common Tx queue structure Bruce Richardson
` (19 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:53 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Anatoly Burakov
The queue structures for the i40e and ice drivers are virtually
identical, so merge them into a common struct. This should allow easier
merging of functions in future, using that common struct.
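To make the idea concrete, here is a minimal, self-contained sketch of
the merged-struct approach. The struct and field names mirror the patch;
the descriptor types, the two helper functions, and the values in main()
are hypothetical stand-ins, not code from this series:

    /* Sketch only: shared bookkeeping over a union of per-driver rings. */
    #include <stdint.h>
    #include <stdio.h>

    struct ice_tx_desc  { volatile uint64_t cmd_type_offset_bsz; };
    struct i40e_tx_desc { volatile uint64_t cmd_type_offset_bsz; };

    struct ieth_tx_queue {
            union { /* TX ring virtual address, one member per driver */
                    volatile struct ice_tx_desc *ice_tx_ring;
                    volatile struct i40e_tx_desc *i40e_tx_ring;
            };
            uint16_t nb_tx_desc;   /* ring size */
            uint16_t tx_next_dd;   /* next descriptor checked for DD */
            uint16_t tx_rs_thresh; /* descriptors freed per cleanup step */
            uint16_t nb_tx_free;   /* descriptors available to use */
    };

    /* Bookkeeping common to both drivers touches only shared fields. */
    static void
    ieth_tx_advance_dd(struct ieth_tx_queue *txq)
    {
            txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
            txq->tx_next_dd = (uint16_t)((txq->tx_next_dd + txq->tx_rs_thresh) %
                            txq->nb_tx_desc);
    }

    /* Each driver keeps a thin wrapper for its own descriptor layout,
     * reached through its union member. (Hypothetical helper name.) */
    static int
    ice_dd_set(const struct ieth_tx_queue *txq, uint64_t done)
    {
            return txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz == done;
    }

    int
    main(void)
    {
            struct ice_tx_desc ring[8] = { 0 };
            struct ieth_tx_queue txq = {
                    .ice_tx_ring = ring, .nb_tx_desc = 8,
                    .tx_next_dd = 3, .tx_rs_thresh = 4, .nb_tx_free = 0,
            };

            ring[3].cmd_type_offset_bsz = 0xf; /* pretend HW wrote back "done" */
            if (ice_dd_set(&txq, 0xf))
                    ieth_tx_advance_dd(&txq);
            printf("nb_tx_free=%u tx_next_dd=%u\n",
                            (unsigned)txq.nb_tx_free, (unsigned)txq.tx_next_dd);
            return 0;
    }

The union costs nothing in space or indirection, since both ring
pointers share one slot; bookkeeping helpers can therefore be written
once against struct ieth_tx_queue, while each driver keeps a thin
wrapper for its own descriptor format.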
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/common/intel_eth/ieth_rxtx.h | 54 +++++++++++++++++
drivers/net/i40e/i40e_ethdev.c | 4 +-
drivers/net/i40e/i40e_ethdev.h | 4 +-
drivers/net/i40e/i40e_fdir.c | 4 +-
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 58 +++++++++---------
drivers/net/i40e/i40e_rxtx.h | 50 ++--------------
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +-
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +-
drivers/net/ice/ice_dcf.c | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 10 ++--
drivers/net/ice/ice_diagnose.c | 2 +-
drivers/net/ice/ice_ethdev.c | 2 +-
drivers/net/ice/ice_ethdev.h | 4 +-
drivers/net/ice/ice_rxtx.c | 60 +++++++++----------
drivers/net/ice/ice_rxtx.h | 41 +------------
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +--
drivers/net/ice/ice_rxtx_vec_common.h | 8 +--
drivers/net/ice/ice_rxtx_vec_sse.c | 6 +-
24 files changed, 164 insertions(+), 185 deletions(-)
diff --git a/drivers/common/intel_eth/ieth_rxtx.h b/drivers/common/intel_eth/ieth_rxtx.h
index 95a3cff048..8b12ff59e4 100644
--- a/drivers/common/intel_eth/ieth_rxtx.h
+++ b/drivers/common/intel_eth/ieth_rxtx.h
@@ -26,4 +26,58 @@ struct ieth_vec_tx_entry
struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
};
+struct ieth_tx_queue;
+
+typedef void (*ice_tx_release_mbufs_t)(struct ieth_tx_queue *txq);
+
+struct ieth_tx_queue {
+ union { /* TX ring virtual address */
+ volatile struct ice_tx_desc *ice_tx_ring;
+ volatile struct i40e_tx_desc *i40e_tx_ring;
+ };
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ struct ieth_tx_entry *sw_ring; /* virtual address of SW ring */
+ rte_iova_t tx_ring_dma; /* TX ring DMA address */
+ uint16_t nb_tx_desc; /* number of TX descriptors */
+ uint16_t tx_tail; /* current value of tail register */
+ uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+ /* index to last TX descriptor to have been cleaned */
+ uint16_t last_desc_cleaned;
+ /* Total number of TX descriptors ready to be allocated. */
+ uint16_t nb_tx_free;
+ /* Start freeing TX buffers if there are fewer free descriptors than
+ * this value.
+ */
+ uint16_t tx_free_thresh;
+ /* Number of TX descriptors to use before RS bit is set. */
+ uint16_t tx_rs_thresh;
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
+ uint16_t port_id; /* Device port identifier. */
+ uint16_t queue_id; /* TX queue index. */
+ uint16_t reg_idx;
+ uint64_t offloads;
+ uint16_t tx_next_dd;
+ uint16_t tx_next_rs;
+ uint64_t mbuf_errors;
+ _Bool tx_deferred_start; /* don't start this queue in dev start */
+ _Bool q_set; /* indicate if tx queue has been configured */
+ union { /* the VSI this queue belongs to */
+ struct ice_vsi *ice_vsi;
+ struct i40e_vsi *i40e_vsi;
+ };
+ const struct rte_memzone *mz;
+
+ union {
+ struct { /* ICE driver specific values */
+ ice_tx_release_mbufs_t tx_rel_mbufs;
+ uint32_t q_teid; /* TX schedule node id. */
+ };
+ struct { /* I40E driver specific values */
+ uint8_t dcb_tc;
+ };
+ };
+};
+
#endif /* IETH_RXTX_H_ */
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index ca128c7556..4d74513812 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3685,7 +3685,7 @@ i40e_dev_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct i40e_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct i40e_tx_queue *txq;
+ struct ieth_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
@@ -6585,7 +6585,7 @@ i40e_dev_tx_init(struct i40e_pf *pf)
struct rte_eth_dev_data *data = pf->dev_data;
uint16_t i;
uint32_t ret = I40E_SUCCESS;
- struct i40e_tx_queue *txq;
+ struct ieth_tx_queue *txq;
for (i = 0; i < data->nb_tx_queues; i++) {
txq = data->tx_queues[i];
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 98213948b4..8c8c0a1bcf 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -334,7 +334,7 @@ struct i40e_vsi_list {
};
struct i40e_rx_queue;
-struct i40e_tx_queue;
+struct ieth_tx_queue;
/* Bandwidth limit information */
struct i40e_bw_info {
@@ -738,7 +738,7 @@ TAILQ_HEAD(i40e_fdir_filter_list, i40e_fdir_filter);
struct i40e_fdir_info {
struct i40e_vsi *fdir_vsi; /* pointer to fdir VSI structure */
uint16_t match_counter_index; /* Statistic counter index used for fdir*/
- struct i40e_tx_queue *txq;
+ struct ieth_tx_queue *txq;
struct i40e_rx_queue *rxq;
void *prg_pkt[I40E_FDIR_PRG_PKT_CNT]; /* memory for fdir program packet */
uint64_t dma_addr[I40E_FDIR_PRG_PKT_CNT]; /* physic address of packet memory*/
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index c600167634..c5298ffae0 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -1372,7 +1372,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_fdir_info *fdir_info = &pf->fdir;
- struct i40e_tx_queue *txq = pf->fdir.txq;
+ struct ieth_tx_queue *txq = pf->fdir.txq;
/* no available buffer
* search for more available buffers from the current
@@ -1628,7 +1628,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
const struct i40e_fdir_filter_conf *filter,
bool add, bool wait_status)
{
- struct i40e_tx_queue *txq = pf->fdir.txq;
+ struct ieth_tx_queue *txq = pf->fdir.txq;
struct i40e_rx_queue *rxq = pf->fdir.rxq;
const struct i40e_fdir_action *fdir_action = &filter->action;
volatile struct i40e_tx_desc *txdp;
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 167ee8d428..39bf59d526 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -55,7 +55,7 @@ uint16_t
i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
struct ieth_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 5b8edac3b2..fce3f5ec2a 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -376,7 +376,7 @@ i40e_build_ctob(uint32_t td_cmd,
}
static inline int
-i40e_xmit_cleanup(struct i40e_tx_queue *txq)
+i40e_xmit_cleanup(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *sw_ring = txq->sw_ring;
volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring;
@@ -1080,7 +1080,7 @@ i40e_calc_pkt_desc(struct rte_mbuf *tx_pkt)
uint16_t
i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq;
+ struct ieth_tx_queue *txq;
struct ieth_tx_entry *sw_ring;
struct ieth_tx_entry *txe, *txn;
volatile struct i40e_tx_desc *txd;
@@ -1329,7 +1329,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
static __rte_always_inline int
-i40e_tx_free_bufs(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *txep;
uint16_t tx_rs_thresh = txq->tx_rs_thresh;
@@ -1413,7 +1413,7 @@ tx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts)
/* Fill hardware descriptor ring with mbuf data */
static inline void
-i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
+i40e_tx_fill_hw_ring(struct ieth_tx_queue *txq,
struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
@@ -1441,7 +1441,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
}
static inline uint16_t
-tx_xmit_pkts(struct i40e_tx_queue *txq,
+tx_xmit_pkts(struct ieth_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
@@ -1504,14 +1504,14 @@ i40e_xmit_pkts_simple(void *tx_queue,
uint16_t nb_tx = 0;
if (likely(nb_pkts <= I40E_TX_MAX_BURST))
- return tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ return tx_xmit_pkts((struct ieth_tx_queue *)tx_queue,
tx_pkts, nb_pkts);
while (nb_pkts) {
uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
I40E_TX_MAX_BURST);
- ret = tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ ret = tx_xmit_pkts((struct ieth_tx_queue *)tx_queue,
&tx_pkts[nb_tx], num);
nb_tx = (uint16_t)(nb_tx + ret);
nb_pkts = (uint16_t)(nb_pkts - ret);
@@ -1527,7 +1527,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1549,7 +1549,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
static uint16_t
i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
uint16_t idx;
uint64_t ol_flags;
struct rte_mbuf *mb;
@@ -1611,7 +1611,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
pkt_error = true;
break;
}
- if (mb->nb_segs > ((struct i40e_tx_queue *)tx_queue)->nb_tx_desc) {
+ if (mb->nb_segs > ((struct ieth_tx_queue *)tx_queue)->nb_tx_desc) {
PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
pkt_error = true;
break;
@@ -1873,7 +1873,7 @@ int
i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
int err;
- struct i40e_tx_queue *txq;
+ struct ieth_tx_queue *txq;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1907,7 +1907,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int
i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct i40e_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int err;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -2311,7 +2311,7 @@ i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
@@ -2341,7 +2341,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
static int
i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
- struct i40e_tx_queue *txq)
+ struct ieth_tx_queue *txq)
{
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -2394,7 +2394,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
{
struct i40e_vsi *vsi;
struct i40e_pf *pf = NULL;
- struct i40e_tx_queue *txq;
+ struct ieth_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -2515,7 +2515,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("i40e tx queue",
- sizeof(struct i40e_tx_queue),
+ sizeof(struct ieth_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -2600,7 +2600,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
void
i40e_tx_queue_release(void *txq)
{
- struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct ieth_tx_queue *q = (struct ieth_tx_queue *)txq;
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
@@ -2705,7 +2705,7 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
}
void
-i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
+i40e_tx_queue_release_mbufs(struct ieth_tx_queue *txq)
{
struct rte_eth_dev *dev;
uint16_t i;
@@ -2765,7 +2765,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
}
static int
-i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
+i40e_tx_done_cleanup_full(struct ieth_tx_queue *txq,
uint32_t free_cnt)
{
struct ieth_tx_entry *swr_ring = txq->sw_ring;
@@ -2824,7 +2824,7 @@ i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
}
static int
-i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
+i40e_tx_done_cleanup_simple(struct ieth_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -2848,7 +2848,7 @@ i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
}
static int
-i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
+i40e_tx_done_cleanup_vec(struct ieth_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -2856,7 +2856,7 @@ i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
int
i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct ieth_tx_queue *q = (struct ieth_tx_queue *)txq;
struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -2872,7 +2872,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
}
void
-i40e_reset_tx_queue(struct i40e_tx_queue *txq)
+i40e_reset_tx_queue(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *txe;
uint16_t i, prev, size;
@@ -2911,7 +2911,7 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq)
/* Init the TX queue in hardware */
int
-i40e_tx_queue_init(struct i40e_tx_queue *txq)
+i40e_tx_queue_init(struct ieth_tx_queue *txq)
{
enum i40e_status_code err = I40E_SUCCESS;
struct i40e_vsi *vsi = txq->i40e_vsi;
@@ -3167,7 +3167,7 @@ i40e_dev_free_queues(struct rte_eth_dev *dev)
enum i40e_status_code
i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
{
- struct i40e_tx_queue *txq;
+ struct ieth_tx_queue *txq;
const struct rte_memzone *tz = NULL;
struct rte_eth_dev *dev;
uint32_t ring_size;
@@ -3181,7 +3181,7 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("i40e fdir tx queue",
- sizeof(struct i40e_tx_queue),
+ sizeof(struct ieth_tx_queue),
RTE_CACHE_LINE_SIZE,
SOCKET_ID_ANY);
if (!txq) {
@@ -3304,7 +3304,7 @@ void
i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct i40e_tx_queue *txq;
+ struct ieth_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -3552,7 +3552,7 @@ i40e_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
}
void __rte_cold
-i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
+i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct ieth_tx_queue *txq)
{
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3592,7 +3592,7 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
#endif
if (ad->tx_vec_allowed) {
for (i = 0; i < dev->data->nb_tx_queues; i++) {
- struct i40e_tx_queue *txq =
+ struct ieth_tx_queue *txq =
dev->data->tx_queues[i];
if (txq && i40e_txq_vec_setup(txq)) {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 030c381e0c..e6e36d8e69 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -124,44 +124,6 @@ struct i40e_rx_queue {
const struct rte_memzone *mz;
};
-/*
- * Structure associated with each TX queue.
- */
-struct i40e_tx_queue {
- uint16_t nb_tx_desc; /**< number of TX descriptors */
- rte_iova_t tx_ring_dma; /**< TX ring DMA address */
- volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */
- struct ieth_tx_entry *sw_ring; /**< virtual address of SW ring */
- uint16_t tx_tail; /**< current value of tail register */
- volatile uint8_t *qtx_tail; /**< register address of tail */
- uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
- /**< index to last TX descriptor to have been cleaned */
- uint16_t last_desc_cleaned;
- /**< Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- /**< Start freeing TX buffers if there are less free descriptors than
- this value. */
- uint16_t tx_free_thresh;
- /** Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint16_t port_id; /**< Device port identifier. */
- uint16_t queue_id; /**< TX queue index. */
- uint16_t reg_idx;
- struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */
- uint16_t tx_next_dd;
- uint16_t tx_next_rs;
- bool q_set; /**< indicate if tx queue has been configured */
- uint64_t mbuf_errors;
-
- bool tx_deferred_start; /**< don't start this queue in dev start */
- uint8_t dcb_tc; /**< Traffic class of tx queue */
- uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
- const struct rte_memzone *mz;
-};
-
/** Offload features */
union i40e_tx_offload {
uint64_t data;
@@ -209,15 +171,15 @@ uint16_t i40e_simple_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
uint16_t i40e_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int i40e_tx_queue_init(struct i40e_tx_queue *txq);
+int i40e_tx_queue_init(struct ieth_tx_queue *txq);
int i40e_rx_queue_init(struct i40e_rx_queue *rxq);
-void i40e_free_tx_resources(struct i40e_tx_queue *txq);
+void i40e_free_tx_resources(struct ieth_tx_queue *txq);
void i40e_free_rx_resources(struct i40e_rx_queue *rxq);
void i40e_dev_clear_queues(struct rte_eth_dev *dev);
void i40e_dev_free_queues(struct rte_eth_dev *dev);
void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
-void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+void i40e_reset_tx_queue(struct ieth_tx_queue *txq);
+void i40e_tx_queue_release_mbufs(struct ieth_tx_queue *txq);
int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
@@ -237,13 +199,13 @@ uint16_t i40e_recv_scattered_pkts_vec(void *rx_queue,
uint16_t nb_pkts);
int i40e_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
int i40e_rxq_vec_setup(struct i40e_rx_queue *rxq);
-int i40e_txq_vec_setup(struct i40e_tx_queue *txq);
+int i40e_txq_vec_setup(struct ieth_tx_queue *txq);
void i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq);
uint16_t i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
void i40e_set_rx_function(struct rte_eth_dev *dev);
void i40e_set_tx_function_flag(struct rte_eth_dev *dev,
- struct i40e_tx_queue *txq);
+ struct ieth_tx_queue *txq);
void i40e_set_tx_function(struct rte_eth_dev *dev);
void i40e_set_default_ptype_table(struct rte_eth_dev *dev);
void i40e_set_default_pctype_table(struct rte_eth_dev *dev);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index aed78e4a1a..2ab09eb167 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -551,7 +551,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -625,7 +625,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused * txq)
+i40e_txq_vec_setup(struct ieth_tx_queue __rte_unused * txq)
{
return 0;
}
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 6b7c96c683..e32fa160bf 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -743,7 +743,7 @@ static inline uint16_t
i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -808,7 +808,7 @@ i40e_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 33c1655c9a..b4b38d7db6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -755,7 +755,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
}
static __rte_always_inline int
-i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs_avx512(struct ieth_tx_queue *txq)
{
struct ieth_vec_tx_entry *txep;
uint32_t n;
@@ -933,7 +933,7 @@ static inline uint16_t
i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -999,7 +999,7 @@ i40e_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index a70d9fce78..66e38994a5 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -17,7 +17,7 @@
#endif
static __rte_always_inline int
-i40e_tx_free_bufs(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *txep;
uint32_t n;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 23aaf3a739..b30da1a78c 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -679,7 +679,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
struct rte_mbuf **__rte_restrict tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -753,7 +753,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq)
+i40e_txq_vec_setup(struct ieth_tx_queue __rte_unused *txq)
{
return 0;
}
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 499b6e6ff7..5107cb9f01 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -698,7 +698,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -771,7 +771,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq)
+i40e_txq_vec_setup(struct ieth_tx_queue __rte_unused *txq)
{
return 0;
}
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 204d4eadbb..0b262d34c6 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1177,8 +1177,8 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
{
struct ice_rx_queue **rxq =
(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
- struct ice_tx_queue **txq =
- (struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
+ struct ieth_tx_queue **txq =
+ (struct ieth_tx_queue **)hw->eth_dev->data->tx_queues;
struct virtchnl_vsi_queue_config_info *vc_config;
struct virtchnl_queue_pair_info *vc_qp;
struct dcf_virtchnl_cmd args;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 9485494f86..b5bab35d77 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -387,7 +387,7 @@ reset_rx_queue(struct ice_rx_queue *rxq)
}
static inline void
-reset_tx_queue(struct ice_tx_queue *txq)
+reset_tx_queue(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *txe;
uint32_t i, size;
@@ -454,7 +454,7 @@ ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct iavf_hw *hw = &ad->real_hw.avf;
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int err = 0;
if (tx_queue_id >= dev->data->nb_tx_queues)
@@ -486,7 +486,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct ice_dcf_hw *hw = &ad->real_hw;
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int err;
if (tx_queue_id >= dev->data->nb_tx_queues)
@@ -511,7 +511,7 @@ static int
ice_dcf_start_queues(struct rte_eth_dev *dev)
{
struct ice_rx_queue *rxq;
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int nb_rxq = 0;
int nb_txq, i;
@@ -638,7 +638,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct ice_dcf_hw *hw = &ad->real_hw;
struct ice_rx_queue *rxq;
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int ret, i;
/* Stop All queues */
diff --git a/drivers/net/ice/ice_diagnose.c b/drivers/net/ice/ice_diagnose.c
index 5bec9d00ad..2d0e8e66ce 100644
--- a/drivers/net/ice/ice_diagnose.c
+++ b/drivers/net/ice/ice_diagnose.c
@@ -605,7 +605,7 @@ void print_node(const struct rte_eth_dev_data *ethdata,
get_elem_type(data->data.elem_type));
if (data->data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
for (uint16_t i = 0; i < ethdata->nb_tx_queues; i++) {
- struct ice_tx_queue *q = ethdata->tx_queues[i];
+ struct ieth_tx_queue *q = ethdata->tx_queues[i];
if (q->q_teid == data->node_teid) {
fprintf(stream, "\t\t\t\t<tr><td>TXQ</td><td>%u</td></tr>\n", i);
break;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 93a6308a86..378979b858 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -6448,7 +6448,7 @@ ice_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct ice_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index a5b27fabd2..b5d39b3fc6 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -258,7 +258,7 @@ struct ice_vsi_list {
};
struct ice_rx_queue;
-struct ice_tx_queue;
+struct ieth_tx_queue;
/**
* Structure that defines a VSI, associated with a adapter.
@@ -408,7 +408,7 @@ struct ice_fdir_counter_pool_container {
*/
struct ice_fdir_info {
struct ice_vsi *fdir_vsi; /* pointer to fdir VSI structure */
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
struct ice_rx_queue *rxq;
void *prg_pkt; /* memory for fdir program packet */
uint64_t dma_addr; /* physic address of packet memory*/
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index df9b09ae0c..20ebda68c7 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -743,7 +743,7 @@ ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int err;
struct ice_vsi *vsi;
struct ice_hw *hw;
@@ -944,7 +944,7 @@ int
ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int err;
struct ice_vsi *vsi;
struct ice_hw *hw;
@@ -1008,7 +1008,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* Free all mbufs for descriptors in tx queue */
static void
-_ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+_ice_tx_queue_release_mbufs(struct ieth_tx_queue *txq)
{
uint16_t i;
@@ -1026,7 +1026,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
}
static void
-ice_reset_tx_queue(struct ice_tx_queue *txq)
+ice_reset_tx_queue(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *txe;
uint16_t i, prev, size;
@@ -1066,7 +1066,7 @@ ice_reset_tx_queue(struct ice_tx_queue *txq)
int
ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
@@ -1134,7 +1134,7 @@ ice_fdir_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
@@ -1354,7 +1354,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -1467,7 +1467,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket(NULL,
- sizeof(struct ice_tx_queue),
+ sizeof(struct ieth_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -1542,7 +1542,7 @@ ice_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
void
ice_tx_queue_release(void *txq)
{
- struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+ struct ieth_tx_queue *q = (struct ieth_tx_queue *)txq;
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
@@ -1577,7 +1577,7 @@ void
ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -2354,7 +2354,7 @@ ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct ice_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
@@ -2412,7 +2412,7 @@ ice_free_queues(struct rte_eth_dev *dev)
int
ice_fdir_setup_tx_resources(struct ice_pf *pf)
{
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
const struct rte_memzone *tz = NULL;
uint32_t ring_size;
struct rte_eth_dev *dev;
@@ -2426,7 +2426,7 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("ice fdir tx queue",
- sizeof(struct ice_tx_queue),
+ sizeof(struct ieth_tx_queue),
RTE_CACHE_LINE_SIZE,
SOCKET_ID_ANY);
if (!txq) {
@@ -2835,7 +2835,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
}
static inline int
-ice_xmit_cleanup(struct ice_tx_queue *txq)
+ice_xmit_cleanup(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *sw_ring = txq->sw_ring;
volatile struct ice_tx_desc *txd = txq->ice_tx_ring;
@@ -2958,7 +2958,7 @@ ice_calc_pkt_desc(struct rte_mbuf *tx_pkt)
uint16_t
ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
volatile struct ice_tx_desc *ice_tx_ring;
volatile struct ice_tx_desc *txd;
struct ieth_tx_entry *sw_ring;
@@ -3182,7 +3182,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
static __rte_always_inline int
-ice_tx_free_bufs(struct ice_tx_queue *txq)
+ice_tx_free_bufs(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *txep;
uint16_t i;
@@ -3218,7 +3218,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
}
static int
-ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
+ice_tx_done_cleanup_full(struct ieth_tx_queue *txq,
uint32_t free_cnt)
{
struct ieth_tx_entry *swr_ring = txq->sw_ring;
@@ -3278,7 +3278,7 @@ ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
#ifdef RTE_ARCH_X86
static int
-ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
+ice_tx_done_cleanup_vec(struct ieth_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -3286,7 +3286,7 @@ ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
#endif
static int
-ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
+ice_tx_done_cleanup_simple(struct ieth_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -3312,7 +3312,7 @@ ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
int
ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+ struct ieth_tx_queue *q = (struct ieth_tx_queue *)txq;
struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3357,7 +3357,7 @@ tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
}
static inline void
-ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+ice_tx_fill_hw_ring(struct ieth_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail];
@@ -3389,7 +3389,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
}
static inline uint16_t
-tx_xmit_pkts(struct ice_tx_queue *txq,
+tx_xmit_pkts(struct ieth_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
@@ -3452,14 +3452,14 @@ ice_xmit_pkts_simple(void *tx_queue,
uint16_t nb_tx = 0;
if (likely(nb_pkts <= ICE_TX_MAX_BURST))
- return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ return tx_xmit_pkts((struct ieth_tx_queue *)tx_queue,
tx_pkts, nb_pkts);
while (nb_pkts) {
uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
ICE_TX_MAX_BURST);
- ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ ret = tx_xmit_pkts((struct ieth_tx_queue *)tx_queue,
&tx_pkts[nb_tx], num);
nb_tx = (uint16_t)(nb_tx + ret);
nb_pkts = (uint16_t)(nb_pkts - ret);
@@ -3667,7 +3667,7 @@ ice_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
}
void __rte_cold
-ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
+ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ieth_tx_queue *txq)
{
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3716,7 +3716,7 @@ ice_check_empty_mbuf(struct rte_mbuf *tx_pkt)
static uint16_t
ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct ice_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
uint16_t idx;
struct rte_mbuf *mb;
bool pkt_error = false;
@@ -3778,7 +3778,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
pkt_error = true;
break;
}
- if (mb->nb_segs > ((struct ice_tx_queue *)tx_queue)->nb_tx_desc) {
+ if (mb->nb_segs > ((struct ieth_tx_queue *)tx_queue)->nb_tx_desc) {
PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
pkt_error = true;
break;
@@ -3839,7 +3839,7 @@ ice_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
(m->tso_segsz < ICE_MIN_TSO_MSS ||
m->tso_segsz > ICE_MAX_TSO_MSS ||
m->nb_segs >
- ((struct ice_tx_queue *)tx_queue)->nb_tx_desc ||
+ ((struct ieth_tx_queue *)tx_queue)->nb_tx_desc ||
m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
/**
* MSS outside the range are considered malicious
@@ -3881,7 +3881,7 @@ ice_set_tx_function(struct rte_eth_dev *dev)
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int mbuf_check = ad->devargs.mbuf_check;
#ifdef RTE_ARCH_X86
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int i;
int tx_check_ret = -1;
@@ -4693,7 +4693,7 @@ ice_check_fdir_programming_status(struct ice_rx_queue *rxq)
int
ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc)
{
- struct ice_tx_queue *txq = pf->fdir.txq;
+ struct ieth_tx_queue *txq = pf->fdir.txq;
struct ice_rx_queue *rxq = pf->fdir.rxq;
volatile struct ice_fltr_desc *fdirdp;
volatile struct ice_tx_desc *txdp;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 91f8ed2036..9c8022d1be 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -79,7 +79,6 @@ extern int ice_timestamp_dynfield_offset;
#define ICE_TX_MTU_SEG_MAX 8
typedef void (*ice_rx_release_mbufs_t)(struct ice_rx_queue *rxq);
-typedef void (*ice_tx_release_mbufs_t)(struct ice_tx_queue *txq);
typedef void (*ice_rxd_to_pkt_fields_t)(struct ice_rx_queue *rxq,
struct rte_mbuf *mb,
volatile union ice_rx_flex_desc *rxdp);
@@ -145,42 +144,6 @@ struct ice_rx_queue {
bool ts_enable; /* if rxq timestamp is enabled */
};
-struct ice_tx_queue {
- uint16_t nb_tx_desc; /* number of TX descriptors */
- rte_iova_t tx_ring_dma; /* TX ring DMA address */
- volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */
- struct ieth_tx_entry *sw_ring; /* virtual address of SW ring */
- uint16_t tx_tail; /* current value of tail register */
- volatile uint8_t *qtx_tail; /* register address of tail */
- uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
- /* index to last TX descriptor to have been cleaned */
- uint16_t last_desc_cleaned;
- /* Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- /* Start freeing TX buffers if there are less free descriptors than
- * this value.
- */
- uint16_t tx_free_thresh;
- /* Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint16_t port_id; /* Device port identifier. */
- uint16_t queue_id; /* TX queue index. */
- uint32_t q_teid; /* TX schedule node id. */
- uint16_t reg_idx;
- uint64_t offloads;
- struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */
- uint16_t tx_next_dd;
- uint16_t tx_next_rs;
- uint64_t mbuf_errors;
- bool tx_deferred_start; /* don't start this queue in dev start */
- bool q_set; /* indicate if tx queue has been configured */
- ice_tx_release_mbufs_t tx_rel_mbufs;
- const struct rte_memzone *mz;
-};
-
/* Offload features */
union ice_tx_offload {
uint64_t data;
@@ -268,7 +231,7 @@ void ice_set_rx_function(struct rte_eth_dev *dev);
uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
void ice_set_tx_function_flag(struct rte_eth_dev *dev,
- struct ice_tx_queue *txq);
+ struct ieth_tx_queue *txq);
void ice_set_tx_function(struct rte_eth_dev *dev);
uint32_t ice_rx_queue_count(void *rx_queue);
void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
@@ -290,7 +253,7 @@ void ice_select_rxd_to_pkt_fields_handler(struct ice_rx_queue *rxq,
int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
int ice_rxq_vec_setup(struct ice_rx_queue *rxq);
-int ice_txq_vec_setup(struct ice_tx_queue *txq);
+int ice_txq_vec_setup(struct ieth_tx_queue *txq);
uint16_t ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index d4c76686f7..370871c320 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -856,7 +856,7 @@ static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -924,7 +924,7 @@ ice_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 1126a30bf8..4d95561f8c 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -860,7 +860,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
}
static __rte_always_inline int
-ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
+ice_tx_free_bufs_avx512(struct ieth_tx_queue *txq)
{
struct ieth_vec_tx_entry *txep;
uint32_t n;
@@ -1053,7 +1053,7 @@ static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool do_offload)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -1122,7 +1122,7 @@ ice_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1144,7 +1144,7 @@ ice_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index b2e3c0f6b7..b8e69f3c12 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -13,7 +13,7 @@
#endif
static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
+ice_tx_free_bufs_vec(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *txep;
uint32_t n;
@@ -105,7 +105,7 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
}
static inline void
-_ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
+_ice_tx_queue_release_mbufs_vec(struct ieth_tx_queue *txq)
{
uint16_t i;
@@ -231,7 +231,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
}
static inline int
-ice_tx_vec_queue_default(struct ice_tx_queue *txq)
+ice_tx_vec_queue_default(struct ieth_tx_queue *txq)
{
if (!txq)
return -1;
@@ -273,7 +273,7 @@ static inline int
ice_tx_vec_dev_check_default(struct rte_eth_dev *dev)
{
int i;
- struct ice_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int ret = 0;
int result = 0;
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 5db66f3c6a..b951d85cfd 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -697,7 +697,7 @@ static uint16_t
ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -766,7 +766,7 @@ ice_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -793,7 +793,7 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq)
}
int __rte_cold
-ice_txq_vec_setup(struct ice_tx_queue __rte_unused *txq)
+ice_txq_vec_setup(struct ieth_tx_queue __rte_unused *txq)
{
if (!txq)
return -1;
--
2.43.0
* [RFC PATCH 07/21] net/iavf: use common Tx queue structure
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (5 preceding siblings ...)
2024-11-22 12:53 ` [RFC PATCH 06/21] common/intel_eth: merge ice and i40e Tx queue struct Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 08/21] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
` (18 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Vladimir Medvedkin, Ian Stokes, Konstantin Ananyev
Merge in the few additional fields used by the iavf driver and convert that
driver to using the common Tx queue structure as well.
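For illustration, here is a minimal compilable sketch (not part of the patch;
the stand-in type and field names only mirror the ieth_rxtx.h diff below) of
how a single queue structure can carry per-driver state in anonymous unions
while the generic fields stay shared:

#include <stdint.h>

struct iavf_tx_desc;            /* per-driver HW descriptor types; */
struct ice_tx_desc;             /* pointers need only forward decls */

struct demo_tx_queue {
	union {                 /* ring pointer: one arm per driver */
		volatile struct iavf_tx_desc *iavf_tx_ring;
		volatile struct ice_tx_desc *ice_tx_ring;
	};
	uint16_t nb_tx_desc;    /* generic fields, shared by all drivers */
	uint16_t tx_tail;
	union {                 /* driver-specific tail section */
		struct {        /* i40e */
			uint8_t dcb_tc;
		};
		struct {        /* iavf */
			uint8_t vlan_flag;
			uint8_t tc;
		};
	};
};

/* iavf code keeps referring to its own names through the union */
static inline volatile struct iavf_tx_desc *
demo_iavf_ring(struct demo_tx_queue *txq)
{
	return txq->iavf_tx_ring;
}

Only one union arm is live for any given queue, so the per-driver extras cost
no space beyond the largest arm.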
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/common/intel_eth/ieth_rxtx.h | 16 +++++++-
drivers/net/iavf/iavf.h | 2 +-
drivers/net/iavf/iavf_ethdev.c | 4 +-
drivers/net/iavf/iavf_rxtx.c | 42 ++++++++++-----------
drivers/net/iavf/iavf_rxtx.h | 49 +++----------------------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 8 ++--
drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 ++--
drivers/net/iavf/iavf_vchnl.c | 4 +-
10 files changed, 63 insertions(+), 88 deletions(-)
diff --git a/drivers/common/intel_eth/ieth_rxtx.h b/drivers/common/intel_eth/ieth_rxtx.h
index 8b12ff59e4..986e0a6d42 100644
--- a/drivers/common/intel_eth/ieth_rxtx.h
+++ b/drivers/common/intel_eth/ieth_rxtx.h
@@ -32,8 +32,9 @@ typedef void (*ice_tx_release_mbufs_t)(struct ieth_tx_queue *txq);
struct ieth_tx_queue {
union { /* TX ring virtual address */
- volatile struct ice_tx_desc *ice_tx_ring;
volatile struct i40e_tx_desc *i40e_tx_ring;
+ volatile struct iavf_tx_desc *iavf_tx_ring;
+ volatile struct ice_tx_desc *ice_tx_ring;
};
volatile uint8_t *qtx_tail; /* register address of tail */
struct ieth_tx_entry *sw_ring; /* virtual address of SW ring */
@@ -64,8 +65,9 @@ struct ieth_tx_queue {
_Bool tx_deferred_start; /* don't start this queue in dev start */
_Bool q_set; /* indicate if tx queue has been configured */
union { /* the VSI this queue belongs to */
- struct ice_vsi *ice_vsi;
struct i40e_vsi *i40e_vsi;
+ struct iavf_vsi *iavf_vsi;
+ struct ice_vsi *ice_vsi;
};
const struct rte_memzone *mz;
@@ -77,6 +79,16 @@ struct ieth_tx_queue {
struct { /* I40E driver specific values */
uint8_t dcb_tc;
};
+ struct { /* iavf driver specific values */
+ uint16_t ipsec_crypto_pkt_md_offset;
+ uint8_t rel_mbufs_type;
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
+ uint8_t vlan_flag;
+ uint8_t tc;
+ uint8_t use_ctx : 1; /* if use the ctx desc, a packet needs
+ two descriptors */
+ };
};
};
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index ad526c644c..7f52ca54f1 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -98,7 +98,7 @@
struct iavf_adapter;
struct iavf_rx_queue;
-struct iavf_tx_queue;
+struct ieth_tx_queue;
struct iavf_ipsec_crypto_stats {
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 7f80cd6258..3d3803f5e9 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -954,7 +954,7 @@ static int
iavf_start_queues(struct rte_eth_dev *dev)
{
struct iavf_rx_queue *rxq;
- struct iavf_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int i;
uint16_t nb_txq, nb_rxq;
@@ -1885,7 +1885,7 @@ iavf_dev_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct iavf_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct iavf_tx_queue *txq;
+ struct ieth_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 2d0f8eda79..c0f7d12804 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -213,7 +213,7 @@ check_rx_vec_allow(struct iavf_rx_queue *rxq)
}
static inline bool
-check_tx_vec_allow(struct iavf_tx_queue *txq)
+check_tx_vec_allow(struct ieth_tx_queue *txq)
{
if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) &&
txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
@@ -282,7 +282,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq)
}
static inline void
-reset_tx_queue(struct iavf_tx_queue *txq)
+reset_tx_queue(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *txe;
uint32_t i, size;
@@ -388,7 +388,7 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq)
}
static inline void
-release_txq_mbufs(struct iavf_tx_queue *txq)
+release_txq_mbufs(struct ieth_tx_queue *txq)
{
uint16_t i;
@@ -778,7 +778,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
struct iavf_info *vf =
IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_vsi *vsi = &vf->vsi;
- struct iavf_tx_queue *txq;
+ struct ieth_tx_queue *txq;
const struct rte_memzone *mz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -814,7 +814,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("iavf txq",
- sizeof(struct iavf_tx_queue),
+ sizeof(struct ieth_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -979,7 +979,7 @@ iavf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct iavf_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int err = 0;
PMD_DRV_FUNC_TRACE();
@@ -1048,7 +1048,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
- struct iavf_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int err;
PMD_DRV_FUNC_TRACE();
@@ -1092,7 +1092,7 @@ iavf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
void
iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
{
- struct iavf_tx_queue *q = dev->data->tx_queues[qid];
+ struct ieth_tx_queue *q = dev->data->tx_queues[qid];
if (!q)
return;
@@ -1107,7 +1107,7 @@ static void
iavf_reset_queues(struct rte_eth_dev *dev)
{
struct iavf_rx_queue *rxq;
- struct iavf_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int i;
for (i = 0; i < dev->data->nb_tx_queues; i++) {
@@ -2377,7 +2377,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue,
}
static inline int
-iavf_xmit_cleanup(struct iavf_tx_queue *txq)
+iavf_xmit_cleanup(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *sw_ring = txq->sw_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
@@ -2781,7 +2781,7 @@ iavf_fill_data_desc(volatile struct iavf_tx_desc *desc,
static struct iavf_ipsec_crypto_pkt_metadata *
-iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
+iavf_ipsec_crypto_get_pkt_metadata(const struct ieth_tx_queue *txq,
struct rte_mbuf *m)
{
if (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)
@@ -2795,7 +2795,7 @@ iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
volatile struct iavf_tx_desc *txr = txq->iavf_tx_ring;
struct ieth_tx_entry *txe_ring = txq->sw_ring;
struct ieth_tx_entry *txe, *txn;
@@ -3027,7 +3027,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* correct queue.
*/
static int
-iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
+iavf_check_vlan_up2tc(struct ieth_tx_queue *txq, struct rte_mbuf *m)
{
struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id];
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -3646,7 +3646,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
int i, ret;
uint64_t ol_flags;
struct rte_mbuf *m;
- struct iavf_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id];
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_adapter *adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3800,7 +3800,7 @@ static uint16_t
iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
enum iavf_tx_burst_type tx_burst_type;
if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll)
@@ -3823,7 +3823,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t good_pkts = nb_pkts;
const char *reason = NULL;
bool pkt_error = false;
- struct iavf_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
struct iavf_adapter *adapter = txq->iavf_vsi->adapter;
enum iavf_tx_burst_type tx_burst_type =
txq->iavf_vsi->adapter->tx_burst_type;
@@ -4144,7 +4144,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
int mbuf_check = adapter->devargs.mbuf_check;
int no_poll_on_link_down = adapter->devargs.no_poll_on_link_down;
#ifdef RTE_ARCH_X86
- struct iavf_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int i;
int check_ret;
bool use_sse = false;
@@ -4265,7 +4265,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
}
static int
-iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
+iavf_tx_done_cleanup_full(struct ieth_tx_queue *txq,
uint32_t free_cnt)
{
struct ieth_tx_entry *swr_ring = txq->sw_ring;
@@ -4324,7 +4324,7 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
int
iavf_dev_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct iavf_tx_queue *q = (struct iavf_tx_queue *)txq;
+ struct ieth_tx_queue *q = (struct ieth_tx_queue *)txq;
return iavf_tx_done_cleanup_full(q, free_cnt);
}
@@ -4350,7 +4350,7 @@ void
iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct iavf_tx_queue *txq;
+ struct ieth_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -4422,7 +4422,7 @@ iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
int
iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index cba6d0573b..835fc8f08f 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -211,7 +211,7 @@ struct iavf_rxq_ops {
};
struct iavf_txq_ops {
- void (*release_mbufs)(struct iavf_tx_queue *txq);
+ void (*release_mbufs)(struct ieth_tx_queue *txq);
};
@@ -273,43 +273,6 @@ struct iavf_rx_queue {
uint64_t hw_time_update;
};
-/* Structure associated with each TX queue. */
-struct iavf_tx_queue {
- const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */
- rte_iova_t tx_ring_dma; /* Tx ring DMA address */
- struct ieth_tx_entry *sw_ring; /* address array of SW ring */
- uint16_t nb_tx_desc; /* ring length */
- uint16_t tx_tail; /* current value of tail */
- volatile uint8_t *qtx_tail; /* register address of tail */
- /* number of used desc since RS bit set */
- uint16_t nb_tx_used;
- uint16_t nb_tx_free;
- uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t tx_free_thresh;
- uint16_t tx_rs_thresh;
- uint8_t rel_mbufs_type;
- struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */
-
- uint16_t port_id;
- uint16_t queue_id;
- uint64_t offloads;
- uint16_t tx_next_dd; /* next to set RS, for VPMD */
- uint16_t tx_next_rs; /* next to check DD, for VPMD */
- uint16_t ipsec_crypto_pkt_md_offset;
-
- uint64_t mbuf_errors;
-
- bool q_set; /* if rx queue has been configured */
- bool tx_deferred_start; /* don't start this queue in dev start */
- const struct iavf_txq_ops *ops;
-#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
-#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
- uint8_t vlan_flag;
- uint8_t tc;
- uint8_t use_ctx:1; /* if use the ctx desc, a packet needs two descriptors */
-};
-
/* Offload features */
union iavf_tx_offload {
uint64_t data;
@@ -724,7 +687,7 @@ int iavf_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
int iavf_rx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
-int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+int iavf_txq_vec_setup(struct ieth_tx_queue *txq);
uint16_t iavf_recv_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t iavf_recv_pkts_vec_avx512_offload(void *rx_queue,
@@ -757,14 +720,14 @@ uint16_t iavf_xmit_pkts_vec_avx512_ctx_offload(void *tx_queue, struct rte_mbuf *
uint16_t nb_pkts);
uint16_t iavf_xmit_pkts_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq);
+int iavf_txq_vec_setup_avx512(struct ieth_tx_queue *txq);
uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
void iavf_set_default_ptype_table(struct rte_eth_dev *dev);
-void iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq);
+void iavf_tx_queue_release_mbufs_avx512(struct ieth_tx_queue *txq);
void iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq);
-void iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq);
+void iavf_tx_queue_release_mbufs_sse(struct ieth_tx_queue *txq);
static inline
void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq,
@@ -791,7 +754,7 @@ void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq,
* to print the qwords
*/
static inline
-void iavf_dump_tx_descriptor(const struct iavf_tx_queue *txq,
+void iavf_dump_tx_descriptor(const struct ieth_tx_queue *txq,
const volatile void *desc, uint16_t tx_id)
{
const char *name;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 94cf9c0038..25dc339303 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1734,7 +1734,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -1800,7 +1800,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index dd45bc0fd9..c774c0c365 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1845,7 +1845,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
}
static __rte_always_inline int
-iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
+iavf_tx_free_bufs_avx512(struct ieth_tx_queue *txq)
{
struct ieth_vec_tx_entry *txep;
uint32_t n;
@@ -2311,7 +2311,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -2378,7 +2378,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, nb_mbuf, tx_id;
@@ -2446,7 +2446,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -2472,7 +2472,7 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
}
void __rte_cold
-iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
+iavf_tx_queue_release_mbufs_avx512(struct ieth_tx_queue *txq)
{
unsigned int i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
@@ -2493,7 +2493,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
}
int __rte_cold
-iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup_avx512(struct ieth_tx_queue *txq)
{
txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC;
return 0;
@@ -2511,7 +2511,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index b8b5e74b89..7a31c777f0 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -17,7 +17,7 @@
#endif
static __rte_always_inline int
-iavf_tx_free_bufs(struct iavf_tx_queue *txq)
+iavf_tx_free_bufs(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *txep;
uint32_t n;
@@ -104,7 +104,7 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
}
static inline void
-_iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq)
+_iavf_tx_queue_release_mbufs_vec(struct ieth_tx_queue *txq)
{
unsigned i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
@@ -164,7 +164,7 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
}
static inline int
-iavf_tx_vec_queue_default(struct iavf_tx_queue *txq)
+iavf_tx_vec_queue_default(struct ieth_tx_queue *txq)
{
if (!txq)
return -1;
@@ -227,7 +227,7 @@ static inline int
iavf_tx_vec_dev_check_default(struct rte_eth_dev *dev)
{
int i;
- struct iavf_tx_queue *txq;
+ struct ieth_tx_queue *txq;
int ret;
int result = 0;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 0a896a6e6f..de632c6de8 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1366,7 +1366,7 @@ uint16_t
iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ieth_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -1435,7 +1435,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1459,13 +1459,13 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
}
void __rte_cold
-iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq)
+iavf_tx_queue_release_mbufs_sse(struct ieth_tx_queue *txq)
{
_iavf_tx_queue_release_mbufs_vec(txq);
}
int __rte_cold
-iavf_txq_vec_setup(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup(struct ieth_tx_queue *txq)
{
txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
return 0;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 0646a2f978..3bdea403c0 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1220,8 +1220,8 @@ iavf_configure_queues(struct iavf_adapter *adapter,
{
struct iavf_rx_queue **rxq =
(struct iavf_rx_queue **)adapter->dev_data->rx_queues;
- struct iavf_tx_queue **txq =
- (struct iavf_tx_queue **)adapter->dev_data->tx_queues;
+ struct ieth_tx_queue **txq =
+ (struct ieth_tx_queue **)adapter->dev_data->tx_queues;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
struct virtchnl_vsi_queue_config_info *vc_config;
struct virtchnl_queue_pair_info *vc_qp;
--
2.43.0
* [RFC PATCH 08/21] net/ixgbe: convert Tx queue context cache field to ptr
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (6 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 07/21] net/iavf: use common Tx queue structure Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 09/21] net/ixgbe: use common Tx queue structure Bruce Richardson
` (17 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin
Rather than having a two-element array of context cache values inside the Tx
queue structure, convert it to a pointer to a cache placed at the end of the
structure. This makes future merging of the structure easier, as we no longer
need the "ixgbe_advctx_info" struct to be defined when defining a combined
queue structure.
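As a rough sketch of the allocation pattern (illustrative only: plain calloc()
stands in for the cache-line-aligned rte_zmalloc_socket() call the driver
uses, and the demo types are stand-ins), the queue structure and its context
cache come from one allocation, with the pointer aimed just past the struct:

#include <stdint.h>
#include <stdlib.h>

#define DEMO_CTX_NUM 2          /* mirrors IXGBE_CTX_NUM */

struct demo_advctx_info { uint64_t flags; };

struct demo_tx_queue {
	uint32_t ctx_curr;
	struct demo_advctx_info *ctx_cache; /* points into the same block */
};

static struct demo_tx_queue *
demo_txq_alloc(void)
{
	/* one zeroed block: the struct followed by the ctx-cache array */
	struct demo_tx_queue *txq = calloc(1, sizeof(*txq) +
			sizeof(struct demo_advctx_info) * DEMO_CTX_NUM);

	if (txq == NULL)
		return NULL;
	txq->ctx_cache = (struct demo_advctx_info *)(txq + 1);
	return txq;
}

Resetting the cache then becomes a memset() on txq->ctx_cache rather than on
an embedded array, which matches the ixgbe_reset_tx_queue() change in the
diff below.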
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 7 ++++---
drivers/net/ixgbe/ixgbe_rxtx.h | 4 ++--
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index c3b704c201..96eafd52a0 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2522,8 +2522,7 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
txq->ctx_curr = 0;
- memset((void *)&txq->ctx_cache, 0,
- IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+ memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
}
static const struct ixgbe_txq_ops def_txq_ops = {
@@ -2741,10 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
/* First allocate the tx queue data structure */
- txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue),
+ txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
+ sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq == NULL)
return -ENOMEM;
+ txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
/*
* Allocate TX ring hardware descriptors. A memzone large enough to
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 4e437f95e3..8efb46e07a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -215,8 +215,8 @@ struct ixgbe_tx_queue {
uint8_t wthresh; /**< Write-back threshold reg. */
uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
- /** Hardware context0 history. */
- struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
+ /** Hardware context history. */
+ struct ixgbe_advctx_info *ctx_cache;
const struct ixgbe_txq_ops *ops; /**< txq ops */
_Bool tx_deferred_start; /**< not in global dev start. */
#ifdef RTE_LIB_SECURITY
--
2.43.0
* [RFC PATCH 09/21] net/ixgbe: use common Tx queue structure
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (7 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 08/21] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 10/21] common/intel_eth: pack " Bruce Richardson
` (16 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Merge in the additional fields used by the ixgbe driver and then convert the
driver over to using the common Tx queue structure.
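To see why the previous patch's pointer conversion matters, consider this
cut-down sketch (names mirror the diff; the surrounding types are assumed
stand-ins, not the real driver definitions): because the ixgbe arm of the
union holds only a pointer to its context cache, a forward declaration of
ixgbe_advctx_info suffices, and the other drivers never pull in ixgbe headers:

#include <stdint.h>

struct ixgbe_advctx_info;       /* forward declarations are enough */
struct ixgbe_txq_ops;

struct demo_common_txq {
	union {                 /* driver-specific tail section */
		struct {        /* ixgbe */
			const struct ixgbe_txq_ops *ops;
			struct ixgbe_advctx_info *ctx_cache; /* ptr, not array */
			uint32_t ctx_curr;
		};
		struct {        /* iavf */
			uint16_t ipsec_crypto_pkt_md_offset;
			uint8_t vlan_flag;
		};
	};
};

Had ctx_cache stayed an embedded two-entry array, the full definition of
ixgbe_advctx_info would be required here, dragging ixgbe-only types into the
common header.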
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/common/intel_eth/ieth_rxtx.h | 14 +++-
drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 64 +++++++++----------
drivers/net/ixgbe/ixgbe_rxtx.h | 56 ++--------------
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 10 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 10 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 10 +--
8 files changed, 68 insertions(+), 102 deletions(-)
diff --git a/drivers/common/intel_eth/ieth_rxtx.h b/drivers/common/intel_eth/ieth_rxtx.h
index 986e0a6d42..9f8a1d7141 100644
--- a/drivers/common/intel_eth/ieth_rxtx.h
+++ b/drivers/common/intel_eth/ieth_rxtx.h
@@ -35,9 +35,13 @@ struct ieth_tx_queue {
volatile struct i40e_tx_desc *i40e_tx_ring;
volatile struct iavf_tx_desc *iavf_tx_ring;
volatile struct ice_tx_desc *ice_tx_ring;
+ volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
};
volatile uint8_t *qtx_tail; /* register address of tail */
- struct ieth_tx_entry *sw_ring; /* virtual address of SW ring */
+ union {
+ struct ieth_tx_entry *sw_ring; /* virtual address of SW ring */
+ struct ieth_vec_tx_entry *sw_ring_v;
+ };
rte_iova_t tx_ring_dma; /* TX ring DMA address */
uint16_t nb_tx_desc; /* number of TX descriptors */
uint16_t tx_tail; /* current value of tail register */
@@ -89,6 +93,14 @@ struct ieth_tx_queue {
uint8_t use_ctx : 1; /* if use the ctx desc, a packet needs
two descriptors */
};
+ struct { /* ixgbe specific values */
+ const struct ixgbe_txq_ops *ops;
+ struct ixgbe_advctx_info *ctx_cache;
+ uint32_t ctx_curr;
+#ifdef RTE_LIB_SECURITY
+ uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
+#endif
+ };
};
};
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index eb431889c3..e774c51f67 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1116,7 +1116,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
* RX and TX function.
*/
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- struct ixgbe_tx_queue *txq;
+ struct ieth_tx_queue *txq;
/* TX queue function in primary, set by last queue initialized
* Tx queue may not initialized by primary process
*/
@@ -1621,7 +1621,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
* RX function
*/
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- struct ixgbe_tx_queue *txq;
+ struct ieth_tx_queue *txq;
/* TX queue function in primary, set by last queue initialized
* Tx queue may not initialized by primary process
*/
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index 546825f334..d6edc9d0aa 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -51,7 +51,7 @@ uint16_t
ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
- struct ixgbe_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
struct ieth_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 96eafd52a0..e80bd6fccc 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -98,7 +98,7 @@
* Return the total number of buffers freed.
*/
static __rte_always_inline int
-ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_bufs(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *txep;
uint32_t status;
@@ -195,7 +195,7 @@ tx1(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts)
* Copy mbuf pointers to the S/W ring.
*/
static inline void
-ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
+ixgbe_tx_fill_hw_ring(struct ieth_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile union ixgbe_adv_tx_desc *txdp = &(txq->ixgbe_tx_ring[txq->tx_tail]);
@@ -231,7 +231,7 @@ static inline uint16_t
tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring;
uint16_t n = 0;
@@ -344,7 +344,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -362,7 +362,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static inline void
-ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
+ixgbe_set_xmit_ctx(struct ieth_tx_queue *txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
__rte_unused uint64_t *mdata)
@@ -493,7 +493,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
* or create a new context descriptor.
*/
static inline uint32_t
-what_advctx_update(struct ixgbe_tx_queue *txq, uint64_t flags,
+what_advctx_update(struct ieth_tx_queue *txq, uint64_t flags,
union ixgbe_tx_offload tx_offload)
{
/* If match with the current used context */
@@ -561,7 +561,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
/* Reset transmit descriptors after they have been used */
static inline int
-ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
+ixgbe_xmit_cleanup(struct ieth_tx_queue *txq)
{
struct ieth_tx_entry *sw_ring = txq->sw_ring;
volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring;
@@ -623,7 +623,7 @@ uint16_t
ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq;
+ struct ieth_tx_queue *txq;
struct ieth_tx_entry *sw_ring;
struct ieth_tx_entry *txe, *txn;
volatile union ixgbe_adv_tx_desc *txr;
@@ -963,7 +963,7 @@ ixgbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
int i, ret;
uint64_t ol_flags;
struct rte_mbuf *m;
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
for (i = 0; i < nb_pkts; i++) {
m = tx_pkts[i];
@@ -2335,7 +2335,7 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
**********************************************************************/
static void __rte_cold
-ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs(struct ieth_tx_queue *txq)
{
unsigned i;
@@ -2350,7 +2350,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
}
static int
-ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
+ixgbe_tx_done_cleanup_full(struct ieth_tx_queue *txq, uint32_t free_cnt)
{
struct ieth_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
@@ -2408,7 +2408,7 @@ ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
}
static int
-ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
+ixgbe_tx_done_cleanup_simple(struct ieth_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -2432,7 +2432,7 @@ ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
}
static int
-ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
+ixgbe_tx_done_cleanup_vec(struct ieth_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -2441,7 +2441,7 @@ ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
int
ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
@@ -2461,7 +2461,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ieth_tx_queue *txq)
{
if (txq != NULL &&
txq->sw_ring != NULL)
@@ -2469,7 +2469,7 @@ ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
}
static void __rte_cold
-ixgbe_tx_queue_release(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release(struct ieth_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
txq->ops->release_mbufs(txq);
@@ -2487,7 +2487,7 @@ ixgbe_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
/* (Re)set dynamic ixgbe_tx_queue fields to defaults */
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ieth_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = {{0}};
struct ieth_tx_entry *txe = txq->sw_ring;
@@ -2536,7 +2536,7 @@ static const struct ixgbe_txq_ops def_txq_ops = {
* in dev_init by secondary process when attaching to an existing ethdev.
*/
void __rte_cold
-ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
+ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ieth_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
@@ -2618,7 +2618,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_txconf *tx_conf)
{
const struct rte_memzone *tz;
- struct ixgbe_tx_queue *txq;
+ struct ieth_tx_queue *txq;
struct ixgbe_hw *hw;
uint16_t tx_rs_thresh, tx_free_thresh;
uint64_t offloads;
@@ -2740,12 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
/* First allocate the tx queue data structure */
- txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
- sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
+ txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ieth_tx_queue) +
+ sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq == NULL)
return -ENOMEM;
- txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
+ txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ieth_tx_queue));
/*
* Allocate TX ring hardware descriptors. A memzone large enough to
@@ -3312,7 +3312,7 @@ ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct ixgbe_tx_queue *txq = tx_queue;
+ struct ieth_tx_queue *txq = tx_queue;
volatile uint32_t *status;
uint32_t desc;
@@ -3377,7 +3377,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
for (i = 0; i < dev->data->nb_tx_queues; i++) {
- struct ixgbe_tx_queue *txq = dev->data->tx_queues[i];
+ struct ieth_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
txq->ops->release_mbufs(txq);
@@ -5284,7 +5284,7 @@ void __rte_cold
ixgbe_dev_tx_init(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ieth_tx_queue *txq;
uint64_t bus_addr;
uint32_t hlreg0;
uint32_t txctrl;
@@ -5401,7 +5401,7 @@ int __rte_cold
ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ieth_tx_queue *txq;
struct ixgbe_rx_queue *rxq;
uint32_t txdctl;
uint32_t dmatxctl;
@@ -5571,7 +5571,7 @@ int __rte_cold
ixgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ieth_tx_queue *txq;
uint32_t txdctl;
int poll_ms;
@@ -5610,7 +5610,7 @@ int __rte_cold
ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ieth_tx_queue *txq;
uint32_t txdctl;
uint32_t txtdh, txtdt;
int poll_ms;
@@ -5684,7 +5684,7 @@ void
ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct ixgbe_tx_queue *txq;
+ struct ieth_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -5876,7 +5876,7 @@ void __rte_cold
ixgbevf_dev_tx_init(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ieth_tx_queue *txq;
uint64_t bus_addr;
uint32_t txctrl;
uint16_t i;
@@ -5917,7 +5917,7 @@ void __rte_cold
ixgbevf_dev_rxtx_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ieth_tx_queue *txq;
struct ixgbe_rx_queue *rxq;
uint32_t txdctl;
uint32_t rxdctl;
@@ -6126,7 +6126,7 @@ ixgbe_xmit_fixed_burst_vec(void __rte_unused *tx_queue,
}
int
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue __rte_unused *txq)
+ixgbe_txq_vec_setup(struct ieth_tx_queue __rte_unused *txq)
{
return -1;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 8efb46e07a..5b56e48498 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -180,56 +180,10 @@ struct ixgbe_advctx_info {
union ixgbe_tx_offload tx_offload_mask;
};
-/**
- * Structure associated with each TX queue.
- */
-struct ixgbe_tx_queue {
- /** TX ring virtual address. */
- volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
- rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
- union {
- struct ieth_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
- struct ieth_vec_tx_entry *sw_ring_v; /**< address of SW ring for vector PMD */
- };
- volatile uint8_t *qtx_tail; /**< Address of TDT register. */
- uint16_t nb_tx_desc; /**< number of TX descriptors. */
- uint16_t tx_tail; /**< current value of TDT reg. */
- /**< Start freeing TX buffers if there are less free descriptors than
- this value. */
- uint16_t tx_free_thresh;
- /** Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- /** Number of TX descriptors used since RS bit was set. */
- uint16_t nb_tx_used;
- /** Index to last TX descriptor to have been cleaned. */
- uint16_t last_desc_cleaned;
- /** Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- uint16_t tx_next_dd; /**< next desc to scan for DD bit */
- uint16_t tx_next_rs; /**< next desc to set RS bit */
- uint16_t queue_id; /**< TX queue index. */
- uint16_t reg_idx; /**< TX queue register index. */
- uint16_t port_id; /**< Device port identifier. */
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
- uint32_t ctx_curr; /**< Hardware context states. */
- /** Hardware context history. */
- struct ixgbe_advctx_info *ctx_cache;
- const struct ixgbe_txq_ops *ops; /**< txq ops */
- _Bool tx_deferred_start; /**< not in global dev start. */
-#ifdef RTE_LIB_SECURITY
- uint8_t using_ipsec;
- /**< indicates that IPsec TX feature is in use */
-#endif
- const struct rte_memzone *mz;
-};
-
struct ixgbe_txq_ops {
- void (*release_mbufs)(struct ixgbe_tx_queue *txq);
- void (*free_swring)(struct ixgbe_tx_queue *txq);
- void (*reset)(struct ixgbe_tx_queue *txq);
+ void (*release_mbufs)(struct ieth_tx_queue *txq);
+ void (*free_swring)(struct ieth_tx_queue *txq);
+ void (*reset)(struct ieth_tx_queue *txq);
};
/*
@@ -250,7 +204,7 @@ struct ixgbe_txq_ops {
* the queue parameters. Used in tx_queue_setup by primary process and then
* in dev_init by secondary process when attaching to an existing ethdev.
*/
-void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq);
+void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ieth_tx_queue *txq);
/**
* Sets the rx_pkt_burst callback in the ixgbe rte_eth_dev instance.
@@ -287,7 +241,7 @@ void ixgbe_recycle_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb_mbufs);
uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq);
+int ixgbe_txq_vec_setup(struct ieth_tx_queue *txq);
uint64_t ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev);
uint64_t ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index fc254ef3d3..c2fcc51610 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -12,7 +12,7 @@
#include "ixgbe_rxtx.h"
static __rte_always_inline int
-ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_bufs(struct ieth_tx_queue *txq)
{
struct ieth_vec_tx_entry *txep;
uint32_t status;
@@ -79,7 +79,7 @@ tx_backlog_entry(struct ieth_vec_tx_entry *txep,
}
static inline void
-_ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_tx_queue_release_mbufs_vec(struct ieth_tx_queue *txq)
{
unsigned int i;
struct ieth_vec_tx_entry *txe;
@@ -134,7 +134,7 @@ _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static inline void
-_ixgbe_tx_free_swring_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_tx_free_swring_vec(struct ieth_tx_queue *txq)
{
if (txq == NULL)
return;
@@ -146,7 +146,7 @@ _ixgbe_tx_free_swring_vec(struct ixgbe_tx_queue *txq)
}
static inline void
-_ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_reset_tx_queue_vec(struct ieth_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } };
struct ieth_vec_tx_entry *txe = txq->sw_ring_v;
@@ -199,7 +199,7 @@ ixgbe_rxq_vec_setup_default(struct ixgbe_rx_queue *rxq)
}
static inline int
-ixgbe_txq_vec_setup_default(struct ixgbe_tx_queue *txq,
+ixgbe_txq_vec_setup_default(struct ieth_tx_queue *txq,
const struct ixgbe_txq_ops *txq_ops)
{
if (txq->sw_ring_v == NULL)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index e4381802c8..b51072b294 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -571,7 +571,7 @@ uint16_t
ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -634,7 +634,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs_vec(struct ieth_tx_queue *txq)
{
_ixgbe_tx_queue_release_mbufs_vec(txq);
}
@@ -646,13 +646,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ieth_tx_queue *txq)
{
_ixgbe_tx_free_swring_vec(txq);
}
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ieth_tx_queue *txq)
{
_ixgbe_reset_tx_queue_vec(txq);
}
@@ -670,7 +670,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
}
int __rte_cold
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq)
+ixgbe_txq_vec_setup(struct ieth_tx_queue *txq)
{
return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops);
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 4c8cc22f59..ddba15ad52 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -693,7 +693,7 @@ uint16_t
ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -757,7 +757,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs_vec(struct ieth_tx_queue *txq)
{
_ixgbe_tx_queue_release_mbufs_vec(txq);
}
@@ -769,13 +769,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ieth_tx_queue *txq)
{
_ixgbe_tx_free_swring_vec(txq);
}
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ieth_tx_queue *txq)
{
_ixgbe_reset_tx_queue_vec(txq);
}
@@ -793,7 +793,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
}
int __rte_cold
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq)
+ixgbe_txq_vec_setup(struct ieth_tx_queue *txq)
{
return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops);
}
--
2.43.0
* [RFC PATCH 10/21] common/intel_eth: pack Tx queue structure
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (8 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 09/21] net/ixgbe: use common Tx queue structure Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 11/21] common/intel_eth: add post-Tx buffer free function Bruce Richardson
` (15 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes, Anatoly Burakov
Move some fields around to better pack the Tx queue structure and make
sure all data used by the vector codepaths is on the first cacheline of
the structure. Checking with "pahole" on a 64-bit build, only one
6-byte hole is left in the structure after this patch, and that is on
the second cacheline.
As part of the reordering, move the p/h/wthresh values to the
ixgbe-specific part of the union, since ixgbe is the only driver which
actually uses those values. The i40e and ice drivers just record the
values to return them later, so we can drop the fields from the Tx
queue structure for those drivers and report the defaults in all cases.
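To illustrate the principle behind the reordering (a hypothetical pair
of structures, not the real queue layout): a field needing 8-byte
alignment placed between 2-byte fields forces the compiler to insert
padding holes, which "pahole" reports; grouping fields by size removes
them.

#include <stdint.h>

/* Hypothetical layouts only, to show where the holes come from. */
struct holey {
	uint16_t tail;     /* offset 0, then a 6-byte hole ...        */
	uint64_t offloads; /* ... so this lands at offset 8           */
	uint16_t thresh;   /* offset 16, plus 6 bytes tail padding    */
};                         /* sizeof == 24 */

struct packed {
	uint64_t offloads; /* offset 0                                */
	uint16_t tail;     /* offset 8                                */
	uint16_t thresh;   /* offset 10, plus 4 bytes tail padding    */
};                         /* sizeof == 16 */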
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/common/intel_eth/ieth_rxtx.h | 12 +++++-------
drivers/net/i40e/i40e_rxtx.c | 9 +++------
drivers/net/ice/ice_rxtx.c | 9 +++------
3 files changed, 11 insertions(+), 19 deletions(-)
diff --git a/drivers/common/intel_eth/ieth_rxtx.h b/drivers/common/intel_eth/ieth_rxtx.h
index 9f8a1d7141..c336ec81b3 100644
--- a/drivers/common/intel_eth/ieth_rxtx.h
+++ b/drivers/common/intel_eth/ieth_rxtx.h
@@ -42,7 +42,6 @@ struct ieth_tx_queue {
struct ieth_tx_entry *sw_ring; /* virtual address of SW ring */
struct ieth_vec_tx_entry *sw_ring_v;
};
- rte_iova_t tx_ring_dma; /* TX ring DMA address */
uint16_t nb_tx_desc; /* number of TX descriptors */
uint16_t tx_tail; /* current value of tail register */
uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
@@ -56,16 +55,14 @@ struct ieth_tx_queue {
uint16_t tx_free_thresh;
/* Number of TX descriptors to use before RS bit is set. */
uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
uint16_t port_id; /* Device port identifier. */
uint16_t queue_id; /* TX queue index. */
uint16_t reg_idx;
- uint64_t offloads;
uint16_t tx_next_dd;
uint16_t tx_next_rs;
+ uint64_t offloads;
uint64_t mbuf_errors;
+ rte_iova_t tx_ring_dma; /* TX ring DMA address */
_Bool tx_deferred_start; /* don't start this queue in dev start */
_Bool q_set; /* indicate if tx queue has been configured */
union { /* the VSI this queue belongs to */
@@ -97,9 +94,10 @@ struct ieth_tx_queue {
const struct ixgbe_txq_ops *ops;
struct ixgbe_advctx_info *ctx_cache;
uint32_t ctx_curr;
-#ifdef RTE_LIB_SECURITY
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
-#endif
};
};
};
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index fce3f5ec2a..29df978019 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2539,9 +2539,6 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->nb_tx_desc = nb_desc;
txq->tx_rs_thresh = tx_rs_thresh;
txq->tx_free_thresh = tx_free_thresh;
- txq->pthresh = tx_conf->tx_thresh.pthresh;
- txq->hthresh = tx_conf->tx_thresh.hthresh;
- txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
txq->reg_idx = reg_idx;
txq->port_id = dev->data->port_id;
@@ -3310,9 +3307,9 @@ i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_thresh.pthresh = txq->pthresh;
- qinfo->conf.tx_thresh.hthresh = txq->hthresh;
- qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_thresh.pthresh = I40E_DEFAULT_TX_PTHRESH;
+ qinfo->conf.tx_thresh.hthresh = I40E_DEFAULT_TX_HTHRESH;
+ qinfo->conf.tx_thresh.wthresh = I40E_DEFAULT_TX_WTHRESH;
qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 20ebda68c7..9606ac7862 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1492,9 +1492,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
txq->nb_tx_desc = nb_desc;
txq->tx_rs_thresh = tx_rs_thresh;
txq->tx_free_thresh = tx_free_thresh;
- txq->pthresh = tx_conf->tx_thresh.pthresh;
- txq->hthresh = tx_conf->tx_thresh.hthresh;
- txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
txq->reg_idx = vsi->base_queue + queue_idx;
@@ -1583,9 +1580,9 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_thresh.pthresh = txq->pthresh;
- qinfo->conf.tx_thresh.hthresh = txq->hthresh;
- qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_thresh.pthresh = ICE_DEFAULT_TX_PTHRESH;
+ qinfo->conf.tx_thresh.hthresh = ICE_DEFAULT_TX_HTHRESH;
+ qinfo->conf.tx_thresh.wthresh = ICE_DEFAULT_TX_WTHRESH;
qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
--
2.43.0
* [RFC PATCH 11/21] common/intel_eth: add post-Tx buffer free function
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (9 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 10/21] common/intel_eth: pack " Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 12/21] common/intel_eth: add Tx buffer free fn for AVX-512 Bruce Richardson
` (14 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
The post-Tx buffer free actions in the SSE and AVX code paths of the
i40e, iavf and ice drivers are all the same, so centralize them in the
common/intel_eth driver.
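The device-specific part of the operation, the check of the DD bit on
the threshold descriptor, is passed in as a function pointer. Because
the common helper is __rte_always_inline and each driver passes a
compile-time-constant function, the compiler can inline the check so no
indirect call remains on the fast path. A minimal sketch of how a
driver hooks in, using a hypothetical driver "foo" with made-up
descriptor fields:

static inline int
foo_tx_desc_done(struct ieth_tx_queue *txq, uint16_t idx)
{
	/* illustrative only: test the done flag on descriptor 'idx' */
	return (txq->foo_tx_ring[idx].cmd & FOO_TX_DESC_DONE) != 0;
}

static __rte_always_inline int
foo_tx_free_bufs(struct ieth_tx_queue *txq)
{
	return ieth_tx_free_bufs(txq, foo_tx_desc_done);
}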
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
.../common/intel_eth/ieth_rxtx_vec_common.h | 72 +++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_common.h | 72 +++----------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 61 +++-------------
drivers/net/ice/ice_rxtx_vec_common.h | 61 +++-------------
4 files changed, 99 insertions(+), 167 deletions(-)
diff --git a/drivers/common/intel_eth/ieth_rxtx_vec_common.h b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
index 49096d2a41..aadc3dcfac 100644
--- a/drivers/common/intel_eth/ieth_rxtx_vec_common.h
+++ b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <stdint.h>
#include <unistd.h>
#include <rte_mbuf.h>
+#include <rte_ethdev.h>
#include "ieth_rxtx.h"
#define IETH_RX_BURST 32
@@ -85,4 +86,75 @@ ieth_tx_backlog_entry(struct ieth_tx_entry *txep, struct rte_mbuf **tx_pkts, uin
for (uint16_t i = 0; i < (int)nb_pkts; ++i)
txep[i].mbuf = tx_pkts[i];
}
+
+#define IETH_VPMD_TX_MAX_FREE_BUF 64
+
+typedef int (*ieth_desc_done_fn)(struct ieth_tx_queue *txq, uint16_t idx);
+
+static __rte_always_inline int
+ieth_tx_free_bufs(struct ieth_tx_queue *txq, ieth_desc_done_fn desc_done)
+{
+ struct ieth_tx_entry *txep;
+ uint32_t n;
+ uint32_t i;
+ int nb_free = 0;
+ struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
+
+ /* check DD bits on threshold descriptor */
+ if (!desc_done(txq, txq->tx_next_dd))
+ return 0;
+
+ n = txq->tx_rs_thresh;
+
+ /* first buffer to free from S/W ring is at index
+ * tx_next_dd - (tx_rs_thresh-1)
+ */
+ txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
+
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
+ for (i = 0; i < n; i++) {
+ free[i] = txep[i].mbuf;
+ /* no need to reset txep[i].mbuf in vector path */
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
+ goto done;
+ }
+
+ m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+ if (likely(m != NULL)) {
+ free[0] = m;
+ nb_free = 1;
+ for (i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (likely(m != NULL)) {
+ if (likely(m->pool == free[0]->pool)) {
+ free[nb_free++] = m;
+ } else {
+ rte_mempool_put_bulk(free[0]->pool,
+ (void *)free,
+ nb_free);
+ free[0] = m;
+ nb_free = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+ } else {
+ for (i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (m != NULL)
+ rte_mempool_put(m->pool, m);
+ }
+ }
+
+done:
+ /* buffers were freed, update counters */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ return txq->tx_rs_thresh;
+}
+
#endif /* IETH_RXTX_VEC_COMMON_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 66e38994a5..60f2130f4d 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -16,72 +16,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+i40e_tx_desc_done(struct ieth_tx_queue *txq, uint16_t idx)
+{
+ return (txq->i40e_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
+ rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
i40e_tx_free_bufs(struct ieth_tx_queue *txq)
{
- struct ieth_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
- for (i = 0; i < n; i++) {
- free[i] = txep[i].mbuf;
- /* no need to reset txep[i].mbuf in vector path */
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m != NULL)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ieth_tx_free_bufs(txq, i40e_tx_desc_done);
}
static inline void
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 7a31c777f0..ccc447e28d 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -16,61 +16,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+iavf_tx_desc_done(struct ieth_tx_queue *txq, uint16_t idx)
+{
+ return (txq->iavf_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) ==
+ rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
iavf_tx_free_bufs(struct ieth_tx_queue *txq)
{
- struct ieth_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ieth_tx_free_bufs(txq, iavf_tx_desc_done);
}
static inline void
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index b8e69f3c12..ef020a9f89 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -12,61 +12,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+ice_tx_desc_done(struct ieth_tx_queue *txq, uint16_t idx)
+{
+ return (txq->ice_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) ==
+ rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ieth_tx_queue *txq)
{
- struct ieth_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
- rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ieth_tx_free_bufs(txq, ice_tx_desc_done);
}
static inline void
--
2.43.0
* [RFC PATCH 12/21] common/intel_eth: add Tx buffer free fn for AVX-512
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (10 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 11/21] common/intel_eth: add post-Tx buffer free function Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 13/21] net/iavf: use common Tx " Bruce Richardson
` (13 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes, Anatoly Burakov
The AVX-512 code paths for the ice and i40e drivers are identical, and
differ from the regular post-Tx free function in that the SW ring from
which the buffers are freed contains nothing other than the mbuf
pointer. Merge these into a common function in common/intel_eth to save
duplication.
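For reference, the difference between the two SW ring entry formats
referred to here; these definitions are assumed from the earlier
patches in this series which introduced the common entry structures:

/* full entry, used by the scalar and SSE/AVX2 paths */
struct ieth_tx_entry {
	struct rte_mbuf *mbuf;    /* mbuf associated with TX desc */
	uint16_t next_id;         /* index of next desc in ring   */
	uint16_t last_id;         /* index of last scattered desc */
};

/* minimal entry, used by the AVX-512 paths: mbuf pointer only */
struct ieth_vec_tx_entry {
	struct rte_mbuf *mbuf;
};

Since the vector entry holds only the pointer, consecutive entries form
a contiguous array of pointers, which is what makes the bulk copy into
the mempool cache in the function below possible.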
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
.../common/intel_eth/ieth_rxtx_vec_common.h | 93 ++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 114 +----------------
drivers/net/ice/ice_rxtx_vec_avx512.c | 117 +-----------------
3 files changed, 95 insertions(+), 229 deletions(-)
diff --git a/drivers/common/intel_eth/ieth_rxtx_vec_common.h b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
index aadc3dcfac..61b48c88da 100644
--- a/drivers/common/intel_eth/ieth_rxtx_vec_common.h
+++ b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
@@ -157,4 +157,97 @@ ieth_tx_free_bufs(struct ieth_tx_queue *txq, ieth_desc_done_fn desc_done)
return txq->tx_rs_thresh;
}
+static __rte_always_inline int
+ieth_tx_free_bufs_vector(struct ieth_tx_queue *txq, ieth_desc_done_fn desc_done)
+{
+ int nb_free = 0;
+ struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF];
+ struct rte_mbuf *m;
+
+ /* check DD bits on threshold descriptor */
+ if (!desc_done(txq, txq->tx_next_dd))
+ return 0;
+
+ const uint32_t n = txq->tx_rs_thresh;
+
+ /* first buffer to free from S/W ring is at index
+ * tx_next_dd - (tx_rs_thresh - 1)
+ */
+ struct ieth_vec_tx_entry *txep = txq->sw_ring_v;
+ txep += txq->tx_next_dd - (n - 1);
+
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ struct rte_mempool *mp = txep[0].mbuf->pool;
+ void **cache_objs;
+ struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
+ rte_lcore_id());
+
+ if (!cache || cache->len == 0)
+ goto normal;
+
+ cache_objs = &cache->objs[cache->len];
+
+ if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
+ rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
+ goto done;
+ }
+
+ /* The cache follows the following algorithm
+ * 1. Add the objects to the cache
+ * 2. Anything greater than the cache min value (if it
+ * crosses the cache flush threshold) is flushed to the ring.
+ */
+ /* Add elements back into the cache */
+ uint32_t copied = 0;
+ /* n is multiple of 32 */
+ while (copied < n) {
+ memcpy(&cache_objs[copied], &txep[copied], 32 * sizeof(void *));
+ copied += 32;
+ }
+ cache->len += n;
+
+ if (cache->len >= cache->flushthresh) {
+ rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
+ cache->len - cache->size);
+ cache->len = cache->size;
+ }
+ goto done;
+ }
+
+normal:
+ m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+ if (likely(m)) {
+ free[0] = m;
+ nb_free = 1;
+ for (uint32_t i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (likely(m)) {
+ if (likely(m->pool == free[0]->pool)) {
+ free[nb_free++] = m;
+ } else {
+ rte_mempool_put_bulk(free[0]->pool, (void *)free, nb_free);
+ free[0] = m;
+ nb_free = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+ } else {
+ for (uint32_t i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (m)
+ rte_mempool_put(m->pool, m);
+ }
+ }
+
+done:
+ /* buffers were freed, update counters */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ return txq->tx_rs_thresh;
+}
+
#endif /* IETH_RXTX_VEC_COMMON_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index b4b38d7db6..23415c4949 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -754,118 +754,6 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
rx_pkts + retval, nb_pkts);
}
-static __rte_always_inline int
-i40e_tx_free_bufs_avx512(struct ieth_tx_queue *txq)
-{
- struct ieth_vec_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = (void *)txq->sw_ring;
- txep += txq->tx_next_dd - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- void **cache_objs;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
-
- if (!cache || n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_generic_put(mp, (void *)txep, n, cache);
- goto done;
- }
-
- cache_objs = &cache->objs[cache->len];
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it
- * crosses the cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_load_si512(&txep[copied]);
- const __m512i b = _mm512_load_si512(&txep[copied + 8]);
- const __m512i c = _mm512_load_si512(&txep[copied + 16]);
- const __m512i d = _mm512_load_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_load_si512(&txep[copied]);
- const __m512i b = _mm512_load_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk
- (mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- rte_mbuf_prefetch_part2(txep[i + 3].mbuf);
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static inline void
vtx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
{
@@ -941,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs_avx512(txq);
+ ieth_tx_free_bufs_vector(txq, i40e_tx_desc_done);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 4d95561f8c..fc8f9ad34a 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -859,121 +859,6 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
rx_pkts + retval, nb_pkts);
}
-static __rte_always_inline int
-ice_tx_free_bufs_avx512(struct ieth_tx_queue *txq)
-{
- struct ieth_vec_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
- rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh - 1)
- */
- txep = (void *)txq->sw_ring;
- txep += txq->tx_next_dd - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- void **cache_objs;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
-
- if (!cache || cache->len == 0)
- goto normal;
-
- cache_objs = &cache->objs[cache->len];
-
- if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
- goto done;
- }
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it
- * crosses the cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
- const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
- const __m512i d = _mm512_loadu_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk
- (mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
-normal:
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline void
ice_vtx1(volatile struct ice_tx_desc *txdp,
struct rte_mbuf *pkt, uint64_t flags, bool do_offload)
@@ -1064,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_avx512(txq);
+ ieth_tx_free_bufs_vector(txq, ice_tx_desc_done);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
--
2.43.0
* [RFC PATCH 13/21] net/iavf: use common Tx free fn for AVX-512
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (11 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 12/21] common/intel_eth: add Tx buffer free fn for AVX-512 Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 14/21] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
` (12 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes,
Vladimir Medvedkin, Anatoly Burakov
Switch the iavf driver to use the common Tx free function. This
requires one additional parameter to that function, since iavf
sometimes uses context descriptors, meaning there can be two
descriptors per SW ring slot.
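The new parameter is a boolean used directly as a shift amount: when a
queue interleaves a context descriptor with each data descriptor, two
ring descriptors map to one SW ring entry, so the count of mbufs to
free per threshold crossing is halved. A small sketch of the
arithmetic, with illustrative values:

#include <stdbool.h>
#include <stdint.h>

static uint32_t
entries_to_free(uint16_t tx_rs_thresh, bool ctx_descs)
{
	/* ctx_descs promotes to 0 or 1, so this divides by 1 or 2:
	 * e.g. 32 descriptors -> 16 mbufs to free when each packet
	 * consumed a context + data descriptor pair.
	 */
	return tx_rs_thresh >> ctx_descs;
}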
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
.../common/intel_eth/ieth_rxtx_vec_common.h | 6 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 119 +-----------------
drivers/net/ice/ice_rxtx_vec_avx512.c | 2 +-
4 files changed, 7 insertions(+), 122 deletions(-)
diff --git a/drivers/common/intel_eth/ieth_rxtx_vec_common.h b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
index 61b48c88da..a4490f2dca 100644
--- a/drivers/common/intel_eth/ieth_rxtx_vec_common.h
+++ b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
@@ -158,7 +158,7 @@ ieth_tx_free_bufs(struct ieth_tx_queue *txq, ieth_desc_done_fn desc_done)
}
static __rte_always_inline int
-ieth_tx_free_bufs_vector(struct ieth_tx_queue *txq, ieth_desc_done_fn desc_done)
+ieth_tx_free_bufs_vector(struct ieth_tx_queue *txq, ieth_desc_done_fn desc_done, bool ctx_descs)
{
int nb_free = 0;
struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF];
@@ -168,13 +168,13 @@ ieth_tx_free_bufs_vector(struct ieth_tx_queue *txq, ieth_desc_done_fn desc_done)
if (!desc_done(txq, txq->tx_next_dd))
return 0;
- const uint32_t n = txq->tx_rs_thresh;
+ const uint32_t n = txq->tx_rs_thresh >> ctx_descs;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh - 1)
*/
struct ieth_vec_tx_entry *txep = txq->sw_ring_v;
- txep += txq->tx_next_dd - (n - 1);
+ txep += (txq->tx_next_dd >> ctx_descs) - (n - 1);
if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 23415c4949..0ab3a4f02c 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -829,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- ieth_tx_free_bufs_vector(txq, i40e_tx_desc_done);
+ ieth_tx_free_bufs_vector(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index c774c0c365..391fbfcd4d 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1844,121 +1844,6 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
true);
}
-static __rte_always_inline int
-iavf_tx_free_bufs_avx512(struct ieth_tx_queue *txq)
-{
- struct ieth_vec_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh >> txq->use_ctx;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = (void *)txq->sw_ring;
- txep += (txq->tx_next_dd >> txq->use_ctx) - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
- void **cache_objs;
-
- if (!cache || cache->len == 0)
- goto normal;
-
- cache_objs = &cache->objs[cache->len];
-
- if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
- goto done;
- }
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it crosses the
- * cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
- const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
- const __m512i d = _mm512_loadu_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp,
- &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
-normal:
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline void
tx_backlog_entry_avx512(struct ieth_vec_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -2320,7 +2205,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs_avx512(txq);
+ ieth_tx_free_bufs_vector(txq, iavf_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -2387,7 +2272,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs_avx512(txq);
+ ieth_tx_free_bufs_vector(txq, iavf_tx_desc_done, true);
nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1);
nb_commit &= 0xFFFE;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index fc8f9ad34a..c3cbd601b3 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -949,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ieth_tx_free_bufs_vector(txq, ice_tx_desc_done);
+ ieth_tx_free_bufs_vector(txq, ice_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
--
2.43.0
* [RFC PATCH 14/21] net/ice: move Tx queue mbuf cleanup fn to common
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (12 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 13/21] net/iavf: use common Tx " Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 15/21] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
` (11 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Konstantin Ananyev
The function to loop over the Tx queue and clean up all the mbufs on
it, e.g. for queue shutdown, is not device specific and so can move
into the common/intel_eth driver. The only complication is ensuring
that the correct ring format, either minimal vector or full structure,
is used.
The ice driver currently uses two functions and a function pointer to
help with this - though one of those functions performs a further check
internally - so we can simplify this down to just one common function,
with a flag set in the appropriate place. This avoids checking for
AVX-512-specific functions, which were the only ones using the smaller
struct in this driver.
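On a vector Tx path the free routines leave stale mbuf pointers in the
SW ring after freeing (as the "no need to reset txep[i].mbuf" comments
in the earlier patches note), so at teardown only a window of entries
still owns mbufs. A sketch of that window, which is what the common
function below walks, wrapping around at the ring end when needed:

/*
 * Entries from (tx_next_dd - tx_rs_thresh + 1) up to, but not
 * including, tx_tail still own their mbufs; everything outside the
 * window was freed earlier but the pointers were left in place.
 *
 *  index: 0 ......... start ............ tx_tail ...... nb_tx_desc
 *         [ stale     |   live mbufs     | stale                  ]
 */
const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;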
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/common/intel_eth/ieth_rxtx.h | 49 ++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.c | 5 +--
drivers/net/ice/ice_ethdev.h | 3 +-
drivers/net/ice/ice_rxtx.c | 33 +++++------------
drivers/net/ice/ice_rxtx_vec_common.h | 51 ---------------------------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 +--
6 files changed, 61 insertions(+), 84 deletions(-)
diff --git a/drivers/common/intel_eth/ieth_rxtx.h b/drivers/common/intel_eth/ieth_rxtx.h
index c336ec81b3..c8e5e1ad76 100644
--- a/drivers/common/intel_eth/ieth_rxtx.h
+++ b/drivers/common/intel_eth/ieth_rxtx.h
@@ -65,6 +65,8 @@ struct ieth_tx_queue {
rte_iova_t tx_ring_dma; /* TX ring DMA address */
_Bool tx_deferred_start; /* don't start this queue in dev start */
_Bool q_set; /* indicate if tx queue has been configured */
+ _Bool vector_tx; /* port is using vector TX */
+ _Bool vector_sw_ring; /* port is using vectorized SW ring (ieth_vec_tx_entry) */
union { /* the VSI this queue belongs to */
struct i40e_vsi *i40e_vsi;
struct iavf_vsi *iavf_vsi;
@@ -74,7 +76,6 @@ struct ieth_tx_queue {
union {
struct { /* ICE driver specific values */
- ice_tx_release_mbufs_t tx_rel_mbufs;
uint32_t q_teid; /* TX schedule node id. */
};
struct { /* I40E driver specific values */
@@ -102,4 +103,50 @@ struct ieth_tx_queue {
};
};
+#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \
+ uint16_t i = start; \
+ if (txq->tx_tail < i) { \
+ for (; i < txq->nb_tx_desc; i++) { \
+ rte_pktmbuf_free_seg(swr[i].mbuf); \
+ swr[i].mbuf = NULL; \
+ } \
+ i = 0; \
+ } \
+ for (; i < txq->tx_tail; i++) { \
+ rte_pktmbuf_free_seg(swr[i].mbuf); \
+ swr[i].mbuf = NULL; \
+ } \
+} while(0)
+
+static inline void
+ieth_txq_release_all_mbufs(struct ieth_tx_queue *txq)
+{
+ if (unlikely(!txq || !txq->sw_ring))
+ return;
+
+ if (!txq->vector_tx) {
+ for (uint16_t i = 0; i < txq->nb_tx_desc; i++) {
+ if (txq->sw_ring[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+ txq->sw_ring[i].mbuf = NULL;
+ }
+ }
+ return;
+ }
+
+ /**
+ * vPMD tx will not set sw_ring's mbuf to NULL after free,
+ * so need to free remains more carefully.
+ */
+ const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;
+
+ if (txq->vector_sw_ring) {
+ struct ieth_vec_tx_entry *swr = txq->sw_ring_v;
+ IETH_FREE_BUFS_LOOP(txq, swr, start);
+ } else {
+ struct ieth_tx_entry *swr = txq->sw_ring;
+ IETH_FREE_BUFS_LOOP(txq, swr, start);
+ }
+}
+
#endif /* IETH_RXTX_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b5bab35d77..54d17875bb 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -24,6 +24,7 @@
#include "ice_generic_flow.h"
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#include "ieth_rxtx.h"
#define DCF_NUM_MACADDR_MAX 64
@@ -500,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- txq->tx_rel_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -650,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- txq->tx_rel_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index b5d39b3fc6..a99f65c8dc 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -621,13 +621,12 @@ struct ice_adapter {
/* Set bit if the engine is disabled */
unsigned long disabled_engine_mask;
struct ice_parser *psr;
-#ifdef RTE_ARCH_X86
+ /* used only on X86, zero on other Archs */
bool rx_use_avx2;
bool rx_use_avx512;
bool tx_use_avx2;
bool tx_use_avx512;
bool rx_vec_offload_support;
-#endif
};
struct ice_vsi_vlan_pvid_info {
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 9606ac7862..51f82738d5 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -751,6 +751,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
struct ice_aqc_add_tx_qgrp *txq_elem;
struct ice_tlan_ctx tx_ctx;
int buf_len;
+ struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -822,6 +823,10 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EIO;
}
+ /* record what kind of descriptor cleanup we need on teardown */
+ txq->vector_tx = ad->tx_vec_allowed;
+ txq->vector_sw_ring = ad->tx_use_avx512;
+
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
rte_free(txq_elem);
@@ -1006,25 +1011,6 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return 0;
}
-/* Free all mbufs for descriptors in tx queue */
-static void
-_ice_tx_queue_release_mbufs(struct ieth_tx_queue *txq)
-{
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
- return;
- }
-
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static void
ice_reset_tx_queue(struct ieth_tx_queue *txq)
{
@@ -1103,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- txq->tx_rel_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq);
ice_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1166,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- txq->tx_rel_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq);
txq->qtx_tail = NULL;
return 0;
@@ -1518,7 +1504,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
ice_reset_tx_queue(txq);
txq->q_set = true;
dev->data->tx_queues[queue_idx] = txq;
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs;
ice_set_tx_function_flag(dev, txq);
return 0;
@@ -1546,8 +1531,7 @@ ice_tx_queue_release(void *txq)
return;
}
- if (q->tx_rel_mbufs != NULL)
- q->tx_rel_mbufs(q);
+ ieth_txq_release_all_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -2460,7 +2444,6 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
txq->q_set = true;
pf->fdir.txq = txq;
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs;
return ICE_SUCCESS;
}
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index ef020a9f89..e1493cc28b 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -61,57 +61,6 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline void
-_ice_tx_queue_release_mbufs_vec(struct ieth_tx_queue *txq)
-{
- uint16_t i;
-
- if (unlikely(!txq || !txq->sw_ring)) {
- PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
- return;
- }
-
- /**
- * vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
- */
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
-
-#ifdef __AVX512VL__
- struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id];
-
- if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
- dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
- struct ieth_vec_tx_entry *swr = (void *)txq->sw_ring;
-
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- } else
-#endif
- {
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static inline int
ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index b951d85cfd..c89cbf2b15 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -793,12 +793,10 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq)
}
int __rte_cold
-ice_txq_vec_setup(struct ieth_tx_queue __rte_unused *txq)
+ice_txq_vec_setup(struct ieth_tx_queue *txq)
{
if (!txq)
return -1;
-
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs_vec;
return 0;
}
--
2.43.0
* [RFC PATCH 15/21] net/i40e: use common Tx queue mbuf cleanup fn
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (13 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 14/21] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 16/21] net/ixgbe: " Bruce Richardson
` (10 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes
Update the driver to match the "ice" driver and use the common mbuf
ring cleanup code when shutting down a Tx queue.
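For reference, the common helper from the previous patch dispatches on the
vector_tx/vector_sw_ring flags recorded at queue start. A condensed sketch
of its logic - simplified, not the verbatim code, which expresses the
wrap-around loop via the IETH_FREE_BUFS_LOOP macro:

	static inline void
	ieth_txq_release_all_mbufs(struct ieth_tx_queue *txq)
	{
		uint16_t i;

		if (unlikely(!txq || !txq->sw_ring))
			return;

		if (!txq->vector_tx) {
			/* scalar paths NULL each slot on free, so scan them all */
			for (i = 0; i < txq->nb_tx_desc; i++) {
				if (txq->sw_ring[i].mbuf) {
					rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
					txq->sw_ring[i].mbuf = NULL;
				}
			}
			return;
		}

		/*
		 * vector paths do not NULL the mbuf pointer after free, so
		 * only the [next-to-clean, tx_tail) window holds live mbufs
		 */
		i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
		while (i != txq->tx_tail) {
			if (txq->vector_sw_ring) {
				rte_pktmbuf_free_seg(txq->sw_ring_v[i].mbuf);
				txq->sw_ring_v[i].mbuf = NULL;
			} else {
				rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
				txq->sw_ring[i].mbuf = NULL;
			}
			if (++i == txq->nb_tx_desc)
				i = 0;
		}
	}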
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_ethdev.h | 4 +-
drivers/net/i40e/i40e_rxtx.c | 71 ++++------------------------------
drivers/net/i40e/i40e_rxtx.h | 1 -
3 files changed, 10 insertions(+), 66 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 8c8c0a1bcf..0da85b1212 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -1260,12 +1260,12 @@ struct i40e_adapter {
/* For RSS reta table update */
uint8_t rss_reta_updated;
-#ifdef RTE_ARCH_X86
+
+ /* used only on x86, zero on other architectures */
bool rx_use_avx2;
bool rx_use_avx512;
bool tx_use_avx2;
bool tx_use_avx512;
-#endif
};
/**
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 29df978019..362a71c8b2 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -30,6 +30,7 @@
#include "base/i40e_type.h"
#include "i40e_ethdev.h"
#include "i40e_rxtx.h"
+#include "ieth_rxtx.h"
#define DEFAULT_TX_RS_THRESH 32
#define DEFAULT_TX_FREE_THRESH 32
@@ -1875,6 +1876,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int err;
struct ieth_tx_queue *txq;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct i40e_adapter *ad = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1889,6 +1891,9 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
PMD_DRV_LOG(WARNING, "TX queue %u is deferred start",
tx_queue_id);
+ txq->vector_tx = ad->tx_vec_allowed;
+ txq->vector_sw_ring = ad->tx_use_avx512;
+
/*
* tx_queue_id is queue id application refers to, while
* rxq->reg_idx is the real queue index.
@@ -1929,7 +1934,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return err;
}
- i40e_tx_queue_release_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq);
i40e_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -2604,7 +2609,7 @@ i40e_tx_queue_release(void *txq)
return;
}
- i40e_tx_queue_release_mbufs(q);
+ ieth_txq_release_all_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -2701,66 +2706,6 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
rxq->rxrearm_nb = 0;
}
-void
-i40e_tx_queue_release_mbufs(struct ieth_tx_queue *txq)
-{
- struct rte_eth_dev *dev;
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
- return;
- }
-
- dev = &rte_eth_devices[txq->port_id];
-
- /**
- * vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
- */
-#ifdef CC_AVX512_SUPPORT
- if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
- struct ieth_vec_tx_entry *swr = (void *)txq->sw_ring;
-
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- return;
- }
-#endif
- if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx2 ||
- dev->tx_pkt_burst == i40e_xmit_pkts_vec) {
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- } else {
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
- }
-}
-
static int
i40e_tx_done_cleanup_full(struct ieth_tx_queue *txq,
uint32_t free_cnt)
@@ -3127,7 +3072,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
if (!dev->data->tx_queues[i])
continue;
- i40e_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+ ieth_txq_release_all_mbufs(dev->data->tx_queues[i]);
i40e_reset_tx_queue(dev->data->tx_queues[i]);
}
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index e6e36d8e69..cfd12e3972 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -179,7 +179,6 @@ void i40e_dev_clear_queues(struct rte_eth_dev *dev);
void i40e_dev_free_queues(struct rte_eth_dev *dev);
void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
void i40e_reset_tx_queue(struct ieth_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct ieth_tx_queue *txq);
int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
--
2.43.0
* [RFC PATCH 16/21] net/ixgbe: use common Tx queue mbuf cleanup fn
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (14 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 15/21] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 17/21] net/iavf: " Bruce Richardson
` (9 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Update the driver to use the common Tx mbuf cleanup function.
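With the release_mbufs hook gone, the ixgbe-specific ops vtable (see the
ixgbe_rxtx.h hunk below) is left with just two callbacks:

	struct ixgbe_txq_ops {
		void (*free_swring)(struct ieth_tx_queue *txq);
		void (*reset)(struct ieth_tx_queue *txq);
	};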
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 22 +++---------------
drivers/net/ixgbe/ixgbe_rxtx.h | 1 -
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 28 ++---------------------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 7 ------
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 7 ------
5 files changed, 5 insertions(+), 60 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index e80bd6fccc..0d5f4803e5 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2334,21 +2334,6 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
*
**********************************************************************/
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs(struct ieth_tx_queue *txq)
-{
- unsigned i;
-
- if (txq->sw_ring != NULL) {
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf != NULL) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
- }
-}
-
static int
ixgbe_tx_done_cleanup_full(struct ieth_tx_queue *txq, uint32_t free_cnt)
{
@@ -2472,7 +2457,7 @@ static void __rte_cold
ixgbe_tx_queue_release(struct ieth_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
- txq->ops->release_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq);
txq->ops->free_swring(txq);
rte_memzone_free(txq->mz);
rte_free(txq);
@@ -2526,7 +2511,6 @@ ixgbe_reset_tx_queue(struct ieth_tx_queue *txq)
}
static const struct ixgbe_txq_ops def_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
@@ -3380,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
struct ieth_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
- txq->ops->release_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq);
txq->ops->reset(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -5654,7 +5638,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
if (txq->ops != NULL) {
- txq->ops->release_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq);
txq->ops->reset(txq);
}
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 5b56e48498..0a990ee1ca 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -181,7 +181,6 @@ struct ixgbe_advctx_info {
};
struct ixgbe_txq_ops {
- void (*release_mbufs)(struct ieth_tx_queue *txq);
void (*free_swring)(struct ieth_tx_queue *txq);
void (*reset)(struct ieth_tx_queue *txq);
};
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index c2fcc51610..3064b92533 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -78,32 +78,6 @@ tx_backlog_entry(struct ieth_vec_tx_entry *txep,
txep[i].mbuf = tx_pkts[i];
}
-static inline void
-_ixgbe_tx_queue_release_mbufs_vec(struct ieth_tx_queue *txq)
-{
- unsigned int i;
- struct ieth_vec_tx_entry *txe;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-
- if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc)
- return;
-
- /* release the used mbufs in sw_ring */
- for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
- i != txq->tx_tail;
- i = (i + 1) % txq->nb_tx_desc) {
- txe = &txq->sw_ring_v[i];
- rte_pktmbuf_free_seg(txe->mbuf);
- }
- txq->nb_tx_free = max_desc;
-
- /* reset tx_entry */
- for (i = 0; i < txq->nb_tx_desc; i++) {
- txe = &txq->sw_ring_v[i];
- txe->mbuf = NULL;
- }
-}
-
static inline void
_ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -208,6 +182,8 @@ ixgbe_txq_vec_setup_default(struct ieth_tx_queue *txq,
/* leave the first one for overflow */
txq->sw_ring_v = txq->sw_ring_v + 1;
txq->ops = txq_ops;
+ txq->vector_tx = 1;
+ txq->vector_sw_ring = 1;
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index b51072b294..2336a86dd2 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -633,12 +633,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_pkts;
}
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ieth_tx_queue *txq)
-{
- _ixgbe_tx_queue_release_mbufs_vec(txq);
-}
-
void __rte_cold
ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -658,7 +652,6 @@ ixgbe_reset_tx_queue(struct ieth_tx_queue *txq)
}
static const struct ixgbe_txq_ops vec_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index ddba15ad52..9707dd80eb 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -756,12 +756,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_pkts;
}
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ieth_tx_queue *txq)
-{
- _ixgbe_tx_queue_release_mbufs_vec(txq);
-}
-
void __rte_cold
ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -781,7 +775,6 @@ ixgbe_reset_tx_queue(struct ieth_tx_queue *txq)
}
static const struct ixgbe_txq_ops vec_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
--
2.43.0
* [RFC PATCH 17/21] net/iavf: use common Tx queue mbuf cleanup fn
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (15 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 16/21] net/ixgbe: " Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 18/21] net/ice: use vector SW ring for all vector paths Bruce Richardson
` (8 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin,
Konstantin Ananyev, Anatoly Burakov
Adjust the iavf driver to also use the common mbuf freeing functions on
Tx queue release/cleanup. The implementation is complicated a little by
the need to integrate the additional "use_ctx" parameter for the iavf
code, but the changes in other drivers are minimal - just a constant
"false" parameter.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/common/intel_eth/ieth_rxtx.h | 19 +++++++------
drivers/net/i40e/i40e_rxtx.c | 6 ++--
drivers/net/iavf/iavf_rxtx.c | 37 ++-----------------------
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 24 ++--------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 18 ------------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 9 ++----
drivers/net/ice/ice_dcf_ethdev.c | 4 +--
drivers/net/ice/ice_rxtx.c | 6 ++--
drivers/net/ixgbe/ixgbe_rxtx.c | 6 ++--
9 files changed, 28 insertions(+), 101 deletions(-)
diff --git a/drivers/common/intel_eth/ieth_rxtx.h b/drivers/common/intel_eth/ieth_rxtx.h
index c8e5e1ad76..dad1ba4ae1 100644
--- a/drivers/common/intel_eth/ieth_rxtx.h
+++ b/drivers/common/intel_eth/ieth_rxtx.h
@@ -83,7 +83,6 @@ struct ieth_tx_queue {
};
struct { /* iavf driver specific values */
uint16_t ipsec_crypto_pkt_md_offset;
- uint8_t rel_mbufs_type;
#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
uint8_t vlan_flag;
@@ -103,23 +102,23 @@ struct ieth_tx_queue {
};
};
-#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \
+#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
uint16_t i = start; \
- if (txq->tx_tail < i) { \
- for (; i < txq->nb_tx_desc; i++) { \
+ if (end < i) { \
+ for (; i < nb_desc; i++) { \
rte_pktmbuf_free_seg(swr[i].mbuf); \
swr[i].mbuf = NULL; \
} \
i = 0; \
} \
- for (; i < txq->tx_tail; i++) { \
+ for (; i < end; i++) { \
rte_pktmbuf_free_seg(swr[i].mbuf); \
swr[i].mbuf = NULL; \
} \
} while(0)
static inline void
-ieth_txq_release_all_mbufs(struct ieth_tx_queue *txq)
+ieth_txq_release_all_mbufs(struct ieth_tx_queue *txq, bool use_ctx)
{
if (unlikely(!txq || !txq->sw_ring))
return;
@@ -138,14 +137,16 @@ ieth_txq_release_all_mbufs(struct ieth_tx_queue *txq)
* vPMD tx will not set sw_ring's mbuf to NULL after free,
* so need to free remains more carefully.
*/
- const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;
+ const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
+ const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
+ const uint16_t end = txq->tx_tail >> use_ctx;
if (txq->vector_sw_ring) {
struct ieth_vec_tx_entry *swr = txq->sw_ring_v;
- IETH_FREE_BUFS_LOOP(txq, swr, start);
+ IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end);
} else {
struct ieth_tx_entry *swr = txq->sw_ring;
- IETH_FREE_BUFS_LOOP(txq, swr, start);
+ IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end);
}
}
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 362a71c8b2..4878b9b8aa 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1934,7 +1934,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return err;
}
- ieth_txq_release_all_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq, false);
i40e_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -2609,7 +2609,7 @@ i40e_tx_queue_release(void *txq)
return;
}
- ieth_txq_release_all_mbufs(q);
+ ieth_txq_release_all_mbufs(q, false);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -3072,7 +3072,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
if (!dev->data->tx_queues[i])
continue;
- ieth_txq_release_all_mbufs(dev->data->tx_queues[i]);
+ ieth_txq_release_all_mbufs(dev->data->tx_queues[i], false);
i40e_reset_tx_queue(dev->data->tx_queues[i]);
}
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index c0f7d12804..c574b23f34 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -387,24 +387,6 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq)
rxq->rx_nb_avail = 0;
}
-static inline void
-release_txq_mbufs(struct ieth_tx_queue *txq)
-{
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
- return;
- }
-
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static const
struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
[IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_rxq_mbufs,
@@ -413,18 +395,6 @@ struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
#endif
};
-static const
-struct iavf_txq_ops iavf_txq_release_mbufs_ops[] = {
- [IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_txq_mbufs,
-#ifdef RTE_ARCH_X86
- [IAVF_REL_MBUFS_SSE_VEC].release_mbufs = iavf_tx_queue_release_mbufs_sse,
-#ifdef CC_AVX512_SUPPORT
- [IAVF_REL_MBUFS_AVX512_VEC].release_mbufs = iavf_tx_queue_release_mbufs_avx512,
-#endif
-#endif
-
-};
-
static inline void
iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
struct rte_mbuf *mb,
@@ -889,7 +859,6 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->q_set = true;
dev->data->tx_queues[queue_idx] = txq;
txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(queue_idx);
- txq->rel_mbufs_type = IAVF_REL_MBUFS_DEFAULT;
if (check_tx_vec_allow(txq) == false) {
struct iavf_adapter *ad =
@@ -1068,7 +1037,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq, txq->use_ctx);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1097,7 +1066,7 @@ iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!q)
return;
- iavf_txq_release_mbufs_ops[q->rel_mbufs_type].release_mbufs(q);
+ ieth_txq_release_all_mbufs(q, q->use_ctx);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -1114,7 +1083,7 @@ iavf_reset_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq, txq->use_ctx);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 391fbfcd4d..16cfd6a5b3 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2356,31 +2356,11 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
}
-void __rte_cold
-iavf_tx_queue_release_mbufs_avx512(struct ieth_tx_queue *txq)
-{
- unsigned int i;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
- const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */
- const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
- struct ieth_vec_tx_entry *swr = (void *)txq->sw_ring;
-
- if (!txq->sw_ring || txq->nb_tx_free == max_desc)
- return;
-
- i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx;
- while (i != end_desc) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- if (++i == wrap_point)
- i = 0;
- }
-}
-
int __rte_cold
iavf_txq_vec_setup_avx512(struct ieth_tx_queue *txq)
{
- txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC;
+ txq->vector_tx = true;
+ txq->vector_sw_ring = true;
return 0;
}
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index ccc447e28d..20d8262e7f 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -60,24 +60,6 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline void
-_iavf_tx_queue_release_mbufs_vec(struct ieth_tx_queue *txq)
-{
- unsigned i;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-
- if (!txq->sw_ring || txq->nb_tx_free == max_desc)
- return;
-
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- while (i != txq->tx_tail) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- if (++i == txq->nb_tx_desc)
- i = 0;
- }
-}
-
static inline int
iavf_rxq_vec_setup_default(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index de632c6de8..21ad685ff1 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1458,16 +1458,11 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
_iavf_rx_queue_release_mbufs_vec(rxq);
}
-void __rte_cold
-iavf_tx_queue_release_mbufs_sse(struct ieth_tx_queue *txq)
-{
- _iavf_tx_queue_release_mbufs_vec(txq);
-}
-
int __rte_cold
iavf_txq_vec_setup(struct ieth_tx_queue *txq)
{
- txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
+ txq->vector_tx = true;
+ txq->vector_sw_ring = false;
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 54d17875bb..959215117f 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -501,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- ieth_txq_release_all_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq, false);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -651,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- ieth_txq_release_all_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq, false);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 51f82738d5..5e58314b57 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1089,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- ieth_txq_release_all_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq, false);
ice_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1152,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- ieth_txq_release_all_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq, false);
txq->qtx_tail = NULL;
return 0;
@@ -1531,7 +1531,7 @@ ice_tx_queue_release(void *txq)
return;
}
- ieth_txq_release_all_mbufs(q);
+ ieth_txq_release_all_mbufs(q, false);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0d5f4803e5..9299171db0 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2457,7 +2457,7 @@ static void __rte_cold
ixgbe_tx_queue_release(struct ieth_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
- ieth_txq_release_all_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq, false);
txq->ops->free_swring(txq);
rte_memzone_free(txq->mz);
rte_free(txq);
@@ -3364,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
struct ieth_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
- ieth_txq_release_all_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq, false);
txq->ops->reset(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -5638,7 +5638,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
if (txq->ops != NULL) {
- ieth_txq_release_all_mbufs(txq);
+ ieth_txq_release_all_mbufs(txq, false);
txq->ops->reset(txq);
}
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
--
2.43.0
* [RFC PATCH 18/21] net/ice: use vector SW ring for all vector paths
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (16 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 17/21] net/iavf: " Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 19/21] net/i40e: " Bruce Richardson
` (7 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Konstantin Ananyev
The AVX-512 code path used a smaller SW ring structure only containing
the mbuf pointer, but no other fields. The other fields are only used in
the scalar code path, so update all vector driver code paths to use the
smaller, faster structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/common/intel_eth/ieth_rxtx_vec_common.h | 7 +++++++
drivers/net/ice/ice_rxtx.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/ice/ice_rxtx_vec_avx512.c | 14 ++------------
drivers/net/ice/ice_rxtx_vec_common.h | 6 ------
drivers/net/ice/ice_rxtx_vec_sse.c | 12 ++++++------
6 files changed, 22 insertions(+), 31 deletions(-)
diff --git a/drivers/common/intel_eth/ieth_rxtx_vec_common.h b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
index a4490f2dca..c8ac788f98 100644
--- a/drivers/common/intel_eth/ieth_rxtx_vec_common.h
+++ b/drivers/common/intel_eth/ieth_rxtx_vec_common.h
@@ -87,6 +87,13 @@ ieth_tx_backlog_entry(struct ieth_tx_entry *txep, struct rte_mbuf **tx_pkts, uin
txep[i].mbuf = tx_pkts[i];
}
+static __rte_always_inline void
+ieth_tx_backlog_entry_vec(struct ieth_vec_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ for (uint16_t i = 0; i < nb_pkts; ++i)
+ txep[i].mbuf = tx_pkts[i];
+}
+
#define IETH_VPMD_TX_MAX_FREE_BUF 64
typedef int (*ieth_desc_done_fn)(struct ieth_tx_queue *txq, uint16_t idx);
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5e58314b57..127bc604f0 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* record what kind of descriptor cleanup we need on teardown */
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = ad->tx_use_avx512;
+ txq->vector_sw_ring = txq->vector_tx;
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 370871c320..7799d631f8 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ieth_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -867,7 +867,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_vec(txq);
+ ieth_tx_free_bufs_vector(txq, ice_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -875,13 +875,13 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ieth_tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -896,10 +896,10 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
}
- ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index c3cbd601b3..6c2c76f6fc 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -924,16 +924,6 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
}
}
-static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ieth_vec_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool do_offload)
@@ -964,7 +954,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry_avx512(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, do_offload);
tx_pkts += (n - 1);
@@ -982,7 +972,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
}
- ice_tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, do_offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index e1493cc28b..7ddc62e4a1 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -20,12 +20,6 @@ ice_tx_desc_done(struct ieth_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ieth_tx_queue *txq)
-{
- return ieth_tx_free_bufs(txq, ice_tx_desc_done);
-}
-
static inline void
_ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index c89cbf2b15..0cbb84eeb0 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ieth_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -709,7 +709,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_vec(txq);
+ ieth_tx_free_bufs_vector(txq, ice_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
nb_commit = nb_pkts;
@@ -718,13 +718,13 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ieth_tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
ice_vtx1(txdp, *tx_pkts, flags);
@@ -738,10 +738,10 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
}
- ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [RFC PATCH 19/21] net/i40e: use vector SW ring for all vector paths
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (17 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 18/21] net/ice: use vector SW ring for all vector paths Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 20/21] net/iavf: " Bruce Richardson
` (6 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage
The AVX-512 code path used a smaller SW ring structure only containing
the mbuf pointer, but no other fields. The other fields are only used in
the scalar code path, so update all vector driver code paths (AVX2, SSE,
Neon, Altivec) to use the smaller, faster structure.
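All the converted burst-transmit functions below share the same commit
pattern: fill SW ring entries and descriptors up to the end of the ring,
wrap, then write the remainder. In outline, with the descriptor writes
reduced to comments:

	n = (uint16_t)(txq->nb_tx_desc - tx_id);  /* slots before ring end */
	if (nb_commit >= n) {
		ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
		/* write n descriptors, setting the RS bit on the last one */
		tx_pkts += n;
		nb_commit -= n;
		tx_id = 0;                        /* wrap to ring start */
		txep = &txq->sw_ring_v[tx_id];
	}
	ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
	/* write the remaining nb_commit descriptors */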
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 8 +++++---
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 14 ++------------
drivers/net/i40e/i40e_rxtx_vec_common.h | 6 ------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_sse.c | 12 ++++++------
7 files changed, 31 insertions(+), 45 deletions(-)
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 4878b9b8aa..05f7f380c4 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1892,7 +1892,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
tx_queue_id);
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = ad->tx_use_avx512;
+ txq->vector_sw_ring = txq->vector_tx;
/*
* tx_queue_id is queue id application refers to, while
@@ -3551,9 +3551,11 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
}
}
+ if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_128)
+ ad->tx_vec_allowed = false;
+
if (ad->tx_simple_allowed) {
- if (ad->tx_vec_allowed &&
- rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+ if (ad->tx_vec_allowed) {
#ifdef RTE_ARCH_X86
if (ad->tx_use_avx512) {
#ifdef CC_AVX512_SUPPORT
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 2ab09eb167..7acf44d3fe 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -553,14 +553,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ieth_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ieth_tx_free_bufs_vector(txq, i40e_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
nb_commit = nb_pkts;
@@ -569,13 +569,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ieth_tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -589,10 +589,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
}
- ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index e32fa160bf..8f593378d3 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -745,13 +745,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ieth_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ieth_tx_free_bufs_vector(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -759,13 +759,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ieth_tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -780,10 +780,10 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
}
- ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 0ab3a4f02c..e0f1b2bc10 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -807,16 +807,6 @@ vtx(volatile struct i40e_tx_desc *txdp,
}
}
-static __rte_always_inline void
-tx_backlog_entry_avx512(struct ieth_vec_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline uint16_t
i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
@@ -844,7 +834,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry_avx512(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -862,7 +852,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
}
- tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 60f2130f4d..72b4a44faf 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -24,12 +24,6 @@ i40e_tx_desc_done(struct ieth_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-i40e_tx_free_bufs(struct ieth_tx_queue *txq)
-{
- return ieth_tx_free_bufs(txq, i40e_tx_desc_done);
-}
-
static inline void
_i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index b30da1a78c..502dcc9407 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -681,14 +681,14 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
{
struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ieth_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ieth_tx_free_bufs_vector(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -696,13 +696,13 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ieth_tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -716,10 +716,10 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
}
- ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 5107cb9f01..958380815a 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -700,14 +700,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ieth_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ieth_tx_free_bufs_vector(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -715,13 +715,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ieth_tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -735,10 +735,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
}
- ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [RFC PATCH 20/21] net/iavf: use vector SW ring for all vector paths
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (18 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 19/21] net/i40e: " Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 21/21] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
` (5 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Vladimir Medvedkin, Ian Stokes, Konstantin Ananyev
The AVX-512 code path used a smaller SW ring structure only containing
the mbuf pointer, but no other fields. The other fields are only used in
the scalar code path, so update all vector driver code paths (AVX2, SSE)
to use the smaller, faster structure.
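With both ring layouts converged, the per-instruction-set setup split
disappears; the single remaining setup function (from the sse.c hunk
below) is all that is needed:

	int __rte_cold
	iavf_txq_vec_setup(struct ieth_tx_queue *txq)
	{
		txq->vector_tx = true;
		txq->vector_sw_ring = txq->vector_tx;
		return 0;
	}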
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/iavf/iavf_rxtx.c | 7 -------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 8 --------
drivers/net/iavf/iavf_rxtx_vec_common.h | 6 ------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 14 +++++++-------
5 files changed, 13 insertions(+), 34 deletions(-)
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index c574b23f34..869fce00eb 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -4193,14 +4193,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
-#ifdef CC_AVX512_SUPPORT
- if (use_avx512)
- iavf_txq_vec_setup_avx512(txq);
- else
- iavf_txq_vec_setup(txq);
-#else
iavf_txq_vec_setup(txq);
-#endif
}
if (no_poll_on_link_down) {
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 25dc339303..e0c7146c9b 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1736,14 +1736,14 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct ieth_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs(txq);
+ ieth_tx_free_bufs_vector(txq, iavf_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -1751,13 +1751,13 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ieth_tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
iavf_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -1772,10 +1772,10 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
}
- ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 16cfd6a5b3..bda5fb3b22 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2356,14 +2356,6 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
}
-int __rte_cold
-iavf_txq_vec_setup_avx512(struct ieth_tx_queue *txq)
-{
- txq->vector_tx = true;
- txq->vector_sw_ring = true;
- return 0;
-}
-
uint16_t
iavf_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 20d8262e7f..14569e9e3b 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -24,12 +24,6 @@ iavf_tx_desc_done(struct ieth_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-iavf_tx_free_bufs(struct ieth_tx_queue *txq)
-{
- return ieth_tx_free_bufs(txq, iavf_tx_desc_done);
-}
-
static inline void
_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 21ad685ff1..89f4a22271 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1368,14 +1368,14 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ieth_tx_queue *txq = (struct ieth_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct ieth_tx_entry *txep;
+ struct ieth_vec_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs(txq);
+ ieth_tx_free_bufs_vector(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -1384,13 +1384,13 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ieth_tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -1404,10 +1404,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_v[tx_id];
}
- ieth_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
@@ -1462,7 +1462,7 @@ int __rte_cold
iavf_txq_vec_setup(struct ieth_tx_queue *txq)
{
txq->vector_tx = true;
- txq->vector_sw_ring = false;
+ txq->vector_sw_ring = txq->vector_tx;
return 0;
}
--
2.43.0
* [RFC PATCH 21/21] net/ixgbe: use common Tx backlog entry fn
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (19 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 20/21] net/iavf: " Bruce Richardson
@ 2024-11-22 12:54 ` Bruce Richardson
2024-11-25 16:25 ` [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers David Marchand
` (4 subsequent siblings)
25 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-22 12:54 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Remove the custom vector Tx backlog entry function and use the standard
"ieth" one, now that all vector drivers are using the same, smaller ring
structure.
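The "ieth" helper being switched to is the one added to the common header
earlier in the series:

	static __rte_always_inline void
	ieth_tx_backlog_entry_vec(struct ieth_vec_tx_entry *txep,
			struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
	{
		for (uint16_t i = 0; i < nb_pkts; ++i)
			txep[i].mbuf = tx_pkts[i];
	}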
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 10 ----------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 4 ++--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 4 ++--
3 files changed, 4 insertions(+), 14 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 3064b92533..91828e2c54 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -68,16 +68,6 @@ ixgbe_tx_free_bufs(struct ieth_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ieth_vec_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 2336a86dd2..021e14565d 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -597,7 +597,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -614,7 +614,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring_v[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 9707dd80eb..5209c21af7 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -720,7 +720,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -737,7 +737,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring_v[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ieth_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* Re: [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (20 preceding siblings ...)
2024-11-22 12:54 ` [RFC PATCH 21/21] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
@ 2024-11-25 16:25 ` David Marchand
2024-11-25 16:31 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (3 subsequent siblings)
25 siblings, 1 reply; 127+ messages in thread
From: David Marchand @ 2024-11-25 16:25 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Thomas Monjalon
Hello Bruce,
On Fri, Nov 22, 2024 at 1:54 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> This RFC attempts to reduce the amount of code duplication across a
> number of Intel NIC drivers, specifically: ixgbe, i40e, iavf, and ice.
Thanks for starting this effort!
>
> The first patch extract a function from the Rx side, otherwise the
> majority of the changes are on the Tx side, leading to a converged Tx
> queue structure across the 4 drivers, and a large number of common
> functions.
>
> Open question:
> * How should common code across drivers within a single device class be
> managed?
> - For now, I've created an "intel_eth" folder within the "common"
> driver directory, thinking about it after, it implies to me that
> it is common across driver classes.
> - Would it be better to create an "intel_common" directory within the
> "net" folder?
common/ drivers currently host code that is device class agnostic,
like providing helpers to talk with hw.
No common/ driver has a dependency on some device class library.
This series adds code that is not built into a library so there is no
need to express dependencies in meson.
But if the need arises, could it become a problem? (adding a
dependency on lib/ethdev to some drivers/common/xx/).
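For context, expressing such a dependency would be a one-line addition in
the relevant meson.build - a hypothetical sketch, not something the series
actually contains:

	# hypothetical: drivers/common/intel_eth/meson.build
	deps += ['ethdev']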
For now, I prefer the second proposition and have this code hosted in
drivers/net/.
--
David Marchand
* Re: [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers
2024-11-25 16:25 ` [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers David Marchand
@ 2024-11-25 16:31 ` Bruce Richardson
2024-11-26 14:57 ` Thomas Monjalon
0 siblings, 1 reply; 127+ messages in thread
From: Bruce Richardson @ 2024-11-25 16:31 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Thomas Monjalon
On Mon, Nov 25, 2024 at 05:25:47PM +0100, David Marchand wrote:
> Hello Bruce,
>
> On Fri, Nov 22, 2024 at 1:54 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > This RFC attempts to reduce the amount of code duplication across a
> > number of Intel NIC drivers, specifically: ixgbe, i40e, iavf, and ice.
>
> Thanks for starting this effort!
>
> >
> > The first patch extract a function from the Rx side, otherwise the
> > majority of the changes are on the Tx side, leading to a converged Tx
> > queue structure across the 4 drivers, and a large number of common
> > functions.
> >
> > Open question:
> > * How should common code across drivers within a single device class be
> > managed?
> > - For now, I've created an "intel_eth" folder within the "common"
> > driver directory, thinking about it after, it implies to me that
> > it is common across driver classes.
> > - Would it be better to create an "intel_common" directory within the
> > "net" folder?
>
> common/ drivers currently host code that is device class agnostic,
> like providing helpers to talk with hw.
> No common/ driver has a dependency on some device class library.
>
> This series adds code that is not built into a library so there is no
> need to express dependencies in meson.
> But if the need arises, could it become a problem (adding a
> dependency on lib/ethdev to some drivers/common/xx/)?
>
>
> For now, I prefer the second proposal, and would have this code hosted
> in drivers/net/.
>
Thanks for the feedback. While I felt, when I started this prototyping, that
common was the right place for it, at this point I'm tending towards
the second location - keeping it in net.
Any other thoughts on the relative merits of the various locations?
/Bruce
* Re: [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers
2024-11-25 16:31 ` Bruce Richardson
@ 2024-11-26 14:57 ` Thomas Monjalon
2024-11-26 15:27 ` Bruce Richardson
0 siblings, 1 reply; 127+ messages in thread
From: Thomas Monjalon @ 2024-11-26 14:57 UTC (permalink / raw)
To: Bruce Richardson; +Cc: David Marchand, dev
25/11/2024 17:31, Bruce Richardson:
> On Mon, Nov 25, 2024 at 05:25:47PM +0100, David Marchand wrote:
> > Hello Bruce,
> >
> > On Fri, Nov 22, 2024 at 1:54 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > This RFC attempts to reduce the amount of code duplication across a
> > > number of Intel NIC drivers, specifically: ixgbe, i40e, iavf, and ice.
> >
> > Thanks for starting this effort!
> >
> > >
> > > The first patch extract a function from the Rx side, otherwise the
> > > majority of the changes are on the Tx side, leading to a converged Tx
> > > queue structure across the 4 drivers, and a large number of common
> > > functions.
> > >
> > > Open question:
> > > * How should common code across drivers within a single device class be
> > > managed?
> > > - For now, I've created an "intel_eth" folder within the "common"
> > > driver directory, thinking about it after, it implies to me that
> > > it is common across driver classes.
> > > - Would it be better to create an "intel_common" directory within the
> > > "net" folder?
> >
> > common/ drivers currently host code that is device class agnostic,
> > like providing helpers to talk with hw.
> > No common/ driver has a dependency on some device class library.
> >
> > This series adds code that is not built into a library so there is no
> > need to express dependencies in meson.
> > But if the need arises, could it become a problem (adding a
> > dependency on lib/ethdev to some drivers/common/xx/)?
> >
> >
> > For now, I prefer the second proposal, and would have this code hosted
> > in drivers/net/.
> >
> Thanks for the feedback. While I felt, when I started this prototyping, that
> common was the right place for it, at this point I'm tending towards
> the second location - keeping it in net.
> Any other thoughts on the relative merits of the various locations?
We just need to know in which order to build the common directory.
It can be before the bus drivers or later, but you cannot have bus common code
and ethdev common code at the same time.
If you just want to share code inside drivers/net/, I suppose it is OK to keep it there.
Whichever choice you make, you will have to maintain some restrictions on the content due to the location.
* Re: [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers
2024-11-26 14:57 ` Thomas Monjalon
@ 2024-11-26 15:27 ` Bruce Richardson
0 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-11-26 15:27 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: David Marchand, dev
On Tue, Nov 26, 2024 at 03:57:42PM +0100, Thomas Monjalon wrote:
> 25/11/2024 17:31, Bruce Richardson:
> > On Mon, Nov 25, 2024 at 05:25:47PM +0100, David Marchand wrote:
> > > Hello Bruce,
> > >
> > > On Fri, Nov 22, 2024 at 1:54 PM Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > >
> > > > This RFC attempts to reduce the amount of code duplication across a
> > > > number of Intel NIC drivers, specifically: ixgbe, i40e, iavf, and ice.
> > >
> > > Thanks for starting this effort!
> > >
> > > >
> > > > The first patch extract a function from the Rx side, otherwise the
> > > > majority of the changes are on the Tx side, leading to a converged Tx
> > > > queue structure across the 4 drivers, and a large number of common
> > > > functions.
> > > >
> > > > Open question:
> > > > * How should common code across drivers within a single device class be
> > > > managed?
> > > > - For now, I've created an "intel_eth" folder within the "common"
> > > > driver directory, thinking about it after, it implies to me that
> > > > it is common across driver classes.
> > > > - Would it be better to create an "intel_common" directory within the
> > > > "net" folder?
> > >
> > > common/ drivers currently host code that is device class agnostic,
> > > like providing helpers to talk with hw.
> > > No common/ driver has a dependency on some device class library.
> > >
> > > This series adds code that is not built into a library so there is no
> > > need to express dependencies in meson.
> > > But if the need arises, could it become a problem (adding a
> > > dependency on lib/ethdev to some drivers/common/xx/)?
> > >
> > >
> > > For now, I prefer the second proposal, and would have this code hosted
> > > in drivers/net/.
> > >
> > Thanks for the feedback. While I felt, when I started this prototyping, that
> > common was the right place for it, at this point I'm tending towards
> > the second location - keeping it in net.
> > Any other thoughts on the relative merits of the various locations?
>
> We just need to know in which order to build the common directory.
> It can be before the bus drivers or later, but you cannot have bus common code
> and ethdev common code at the same time.
> If you just want to share code inside drivers/net/, I suppose it is OK to keep it there.
> Whichever choice you make, you will have to maintain some restrictions on the content due to the location.
>
Yes, good point. However, that would only apply if the common code were
being built into a separate component to be linked into the other
components using it. The initial prototype work has resulted in header
files only, and my thinking was that even if .c files are added, they would
not be compiled into a .a file, but rather compiled as source into each
individual driver using them. This would allow for a certain amount of
ifdef usage, for example, where things may have slight differences.
However, I think we can cross that bridge when we come to it.
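To make that concrete, here is a purely hypothetical sketch - the header,
macro, and function names are invented for illustration and do not appear
in this series:

/* sketch_common.h: hypothetical shared header, compiled as source
 * into each driver rather than built into a .a file */
#ifndef _SKETCH_COMMON_H_
#define _SKETCH_COMMON_H_

#include <stdint.h>

static inline uint16_t
sketch_ring_slots(uint16_t nb_desc)
{
#ifdef SKETCH_NEEDS_CTX_DESC
	/* a driver that pairs each data descriptor with a context
	 * descriptor would define this macro before including the header */
	return nb_desc * 2;
#else
	return nb_desc;
#endif
}

#endif /* _SKETCH_COMMON_H_ */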
/Bruce
* [PATCH v1 00/21] Reduce code duplication across Intel NIC drivers
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (21 preceding siblings ...)
2024-11-25 16:25 ` [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers David Marchand
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 01/21] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
` (20 more replies)
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (2 subsequent siblings)
25 siblings, 21 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson
This patchset attempts to reduce the amount of code duplication across a
number of Intel NIC drivers, specifically: ixgbe, i40e, iavf, and ice.
The first patch extracts a function from the Rx side; otherwise the
majority of the changes are on the Tx side, leading to a converged Tx
queue structure across the 4 drivers and a large number of common
functions.
RFC->v1:
* Moved the location of the common code from "common/intel_eth" to
  "net/_common_intel", and added only ".." to the driver include path, so
  that the include paths contain "_common_intel", making it clear that
  these are not driver-local headers (see the sketch after this list).
* Due to the change in location, the structure/fn prefix changes from
  "ieth" to "ci" (for "common intel").
* Removed the seemingly arbitrary split of vector and non-vector code,
  since much of the code taken from the vector files was scalar code
  used by the vector drivers.
* Split code into separate Rx and Tx files.
* Fixed multiple checkpatch issues (but not all).
* Attempted to improve name standardization by using "_vec" as a common
  suffix for all vector-related fns and data. Previously, some names had
  "vec" in the middle, others had just a "_v" suffix or the full word
  "vector" as a suffix.
* Other minor changes...
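To illustrate the include-path convention from the first bullet above, a
driver source file spells out the shared prefix at each include site (the
".." addition is visible in the meson.build hunks in the patches below):

/* with the parent of the driver directories ("..") on the include path,
 * the "_common_intel" prefix stays visible at every include site, so
 * shared headers are never mistaken for driver-local ones */
#include <_common_intel/rx.h>
#include <_common_intel/tx.h>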
Bruce Richardson (21):
net/_common_intel: add pkt reassembly fn for intel drivers
net/_common_intel: provide common Tx entry structures
net/_common_intel: add Tx mbuf ring replenish fn
drivers/net: align Tx queue struct field names
drivers/net: add prefix for driver-specific structs
net/_common_intel: merge ice and i40e Tx queue struct
net/iavf: use common Tx queue structure
net/ixgbe: convert Tx queue context cache field to ptr
net/ixgbe: use common Tx queue structure
net/_common_intel: pack Tx queue structure
net/_common_intel: add post-Tx buffer free function
net/_common_intel: add Tx buffer free fn for AVX-512
net/iavf: use common Tx free fn for AVX-512
net/ice: move Tx queue mbuf cleanup fn to common
net/i40e: use common Tx queue mbuf cleanup fn
net/ixgbe: use common Tx queue mbuf cleanup fn
net/iavf: use common Tx queue mbuf cleanup fn
net/ice: use vector SW ring for all vector paths
net/i40e: use vector SW ring for all vector paths
net/iavf: use vector SW ring for all vector paths
net/ixgbe: use common Tx backlog entry fn
drivers/net/_common_intel/rx.h | 81 +++++
drivers/net/_common_intel/tx.h | 327 ++++++++++++++++++
drivers/net/i40e/i40e_ethdev.c | 4 +-
drivers/net/i40e/i40e_ethdev.h | 8 +-
drivers/net/i40e/i40e_fdir.c | 10 +-
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 6 +-
drivers/net/i40e/i40e_rxtx.c | 193 ++++-------
drivers/net/i40e/i40e_rxtx.h | 61 +---
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 144 +-------
drivers/net/i40e/i40e_rxtx_vec_common.h | 144 +-------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 26 +-
drivers/net/i40e/meson.build | 2 +-
drivers/net/iavf/iavf.h | 2 +-
drivers/net/iavf/iavf_ethdev.c | 4 +-
drivers/net/iavf/iavf_rxtx.c | 180 ++++------
drivers/net/iavf/iavf_rxtx.h | 61 +---
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 47 +--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 214 ++----------
drivers/net/iavf/iavf_rxtx_vec_common.h | 160 +--------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 57 ++-
drivers/net/iavf/iavf_vchnl.c | 8 +-
drivers/net/iavf/meson.build | 2 +-
drivers/net/ice/ice_dcf.c | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_diagnose.c | 2 +-
drivers/net/ice/ice_ethdev.c | 2 +-
drivers/net/ice/ice_ethdev.h | 7 +-
drivers/net/ice/ice_rxtx.c | 164 ++++-----
drivers/net/ice/ice_rxtx.h | 52 +--
drivers/net/ice/ice_rxtx_vec_avx2.c | 26 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 153 +-------
drivers/net/ice/ice_rxtx_vec_common.h | 190 +---------
drivers/net/ice/ice_rxtx_vec_sse.c | 30 +-
drivers/net/ice/meson.build | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 2 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 139 ++++----
drivers/net/ixgbe/ixgbe_rxtx.h | 73 +---
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 129 +------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 37 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 37 +-
drivers/net/ixgbe/meson.build | 2 +-
46 files changed, 1014 insertions(+), 1887 deletions(-)
create mode 100644 drivers/net/_common_intel/rx.h
create mode 100644 drivers/net/_common_intel/tx.h
--
2.43.0
* [PATCH v1 01/21] net/_common_intel: add pkt reassembly fn for intel drivers
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 02/21] net/_common_intel: provide common Tx entry structures Bruce Richardson
` (19 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, David Christensen, Ian Stokes,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
The code for reassembling a single, multi-mbuf packet from multiple
buffers received from the NIC is duplicated across many drivers. Rather
than having multiple copies of this function, we can create an
"_common_intel" directory to hold such functions and consolidate
multiple functions down to a single one for easier maintenance.
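As a usage sketch (not part of the patch): the queue type and field names
below are illustrative stand-ins for the per-driver Rx queue structs, but
the call matches the signature of the helper added in rx.h:

#include <_common_intel/rx.h>

/* illustrative stand-in for a driver's Rx queue state */
struct sketch_rx_queue {
	struct rte_mbuf *pkt_first_seg; /* first seg of in-progress pkt */
	struct rte_mbuf *pkt_last_seg;  /* last seg of in-progress pkt */
	uint8_t crc_len;                /* 0, or 4 when CRC is kept */
};

static uint16_t
sketch_recv_scattered(struct sketch_rx_queue *rxq,
		struct rte_mbuf **rx_pkts, uint16_t nb_bufs)
{
	/* one flag per buffer: non-zero means more segments follow;
	 * nb_bufs must not exceed CI_RX_BURST */
	uint8_t split_flags[CI_RX_BURST] = {0};

	/* ... a vector Rx burst would fill rx_pkts[] and set
	 * split_flags[] from the descriptor end-of-packet bits ... */

	return ci_rx_reassemble_packets(rx_pkts, nb_bufs, split_flags,
			&rxq->pkt_first_seg, &rxq->pkt_last_seg,
			rxq->crc_len);
}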
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/rx.h | 81 +++++++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 64 +-----------------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +-
drivers/net/i40e/meson.build | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 8 +--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 8 +--
drivers/net/iavf/iavf_rxtx_vec_common.h | 65 +-----------------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 +--
drivers/net/iavf/meson.build | 2 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +--
drivers/net/ice/ice_rxtx_vec_common.h | 66 +-----------------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 +-
drivers/net/ice/meson.build | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 63 +-----------------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 4 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 4 +-
drivers/net/ixgbe/meson.build | 2 +-
22 files changed, 123 insertions(+), 292 deletions(-)
create mode 100644 drivers/net/_common_intel/rx.h
diff --git a/drivers/net/_common_intel/rx.h b/drivers/net/_common_intel/rx.h
new file mode 100644
index 0000000000..f0155ceb50
--- /dev/null
+++ b/drivers/net/_common_intel/rx.h
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef _COMMON_INTEL_RX_H_
+#define _COMMON_INTEL_RX_H_
+
+#include <stdint.h>
+#include <unistd.h>
+#include <rte_mbuf.h>
+
+#define CI_RX_BURST 32
+
+static inline uint16_t
+ci_rx_reassemble_packets(struct rte_mbuf **rx_bufs,
+ uint16_t nb_bufs, uint8_t *split_flags,
+ struct rte_mbuf **pkt_first_seg,
+ struct rte_mbuf **pkt_last_seg,
+ const uint8_t crc_len)
+{
+ struct rte_mbuf *pkts[CI_RX_BURST] = {0}; /*finished pkts*/
+ struct rte_mbuf *start = *pkt_first_seg;
+ struct rte_mbuf *end = *pkt_last_seg;
+ unsigned int pkt_idx, buf_idx;
+
+ for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+ if (end) {
+ /* processing a split packet */
+ end->next = rx_bufs[buf_idx];
+ rx_bufs[buf_idx]->data_len += crc_len;
+
+ start->nb_segs++;
+ start->pkt_len += rx_bufs[buf_idx]->data_len;
+ end = end->next;
+
+ if (!split_flags[buf_idx]) {
+ /* it's the last packet of the set */
+ start->hash = end->hash;
+ start->vlan_tci = end->vlan_tci;
+ start->ol_flags = end->ol_flags;
+ /* we need to strip crc for the whole packet */
+ start->pkt_len -= crc_len;
+ if (end->data_len > crc_len) {
+ end->data_len -= crc_len;
+ } else {
+ /* free up last mbuf */
+ struct rte_mbuf *secondlast = start;
+
+ start->nb_segs--;
+ while (secondlast->next != end)
+ secondlast = secondlast->next;
+ secondlast->data_len -= (crc_len - end->data_len);
+ secondlast->next = NULL;
+ rte_pktmbuf_free_seg(end);
+ }
+ pkts[pkt_idx++] = start;
+ start = NULL;
+ end = NULL;
+ }
+ } else {
+ /* not processing a split packet */
+ if (!split_flags[buf_idx]) {
+ /* not a split packet, save and skip */
+ pkts[pkt_idx++] = rx_bufs[buf_idx];
+ continue;
+ }
+ start = rx_bufs[buf_idx];
+ end = start;
+ rx_bufs[buf_idx]->data_len += crc_len;
+ rx_bufs[buf_idx]->pkt_len += crc_len;
+ }
+ }
+
+ /* save the partial packet for next time */
+ *pkt_first_seg = start;
+ *pkt_last_seg = end;
+ memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+ return pkt_idx;
+}
+
+#endif /* _COMMON_INTEL_RX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index b6b0d38ec1..95829f65d5 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -494,8 +494,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
if (i == nb_bufs)
return nb_bufs;
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 19cf0ac718..6dd6e55d9c 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -657,8 +657,8 @@ i40e_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/*
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 3b2750221b..506f1b5878 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -725,8 +725,8 @@ i40e_recv_scattered_burst_vec_avx512(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 8b745630e4..1248cecacd 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <ethdev_driver.h>
#include <rte_malloc.h>
+#include <_common_intel/rx.h>
#include "i40e_ethdev.h"
#include "i40e_rxtx.h"
@@ -15,69 +16,6 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static inline uint16_t
-reassemble_packets(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[RTE_I40E_VPMD_RX_BURST]; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end != NULL) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len)
- end->data_len -= rxq->crc_len;
- else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index e1c5c7041b..159d971796 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -623,8 +623,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index ad560d2b6b..3a8128e014 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -641,8 +641,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/meson.build b/drivers/net/i40e/meson.build
index 5c93493124..0e0b416b8f 100644
--- a/drivers/net/i40e/meson.build
+++ b/drivers/net/i40e/meson.build
@@ -36,7 +36,7 @@ sources = files(
testpmd_sources = files('i40e_testpmd.c')
deps += ['hash']
-includes += include_directories('base')
+includes += include_directories('base', '..')
if arch_subdir == 'x86'
sources += files('i40e_rxtx_vec_sse.c')
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 49d41af953..0baf5045c8 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1508,8 +1508,8 @@ iavf_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1597,8 +1597,8 @@ iavf_recv_scattered_burst_vec_avx2_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index d6a861bf80..5a88007096 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1685,8 +1685,8 @@ iavf_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1761,8 +1761,8 @@ iavf_recv_scattered_burst_vec_avx512_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 5c5220048d..26b6f07614 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <ethdev_driver.h>
#include <rte_malloc.h>
+#include <_common_intel/rx.h>
#include "iavf.h"
#include "iavf_rxtx.h"
@@ -15,70 +16,6 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static __rte_always_inline uint16_t
-reassemble_packets(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[IAVF_VPMD_RX_MAX_BURST];
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len) {
- end->data_len -= rxq->crc_len;
- } else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = NULL;
- end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
iavf_tx_free_bufs(struct iavf_tx_queue *txq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 0db6fa8bd4..48b01462ea 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1238,8 +1238,8 @@ iavf_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1307,8 +1307,8 @@ iavf_recv_scattered_burst_vec_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index b48bb83438..9106e016ef 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -5,7 +5,7 @@ if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
subdir_done()
endif
-includes += include_directories('../../common/iavf')
+includes += include_directories('../../common/iavf', '..')
testpmd_sources = files('iavf_testpmd.c')
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index d6e88dbb29..ca247b155c 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -726,8 +726,8 @@ ice_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index add095ef06..1e603d5d8f 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -763,8 +763,8 @@ ice_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -805,8 +805,8 @@ ice_recv_scattered_burst_vec_avx512_offload(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 4b73465af5..dd7da4761f 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -5,77 +5,13 @@
#ifndef _ICE_RXTX_VEC_COMMON_H_
#define _ICE_RXTX_VEC_COMMON_H_
+#include <_common_intel/rx.h>
#include "ice_rxtx.h"
#ifndef __INTEL_COMPILER
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static inline uint16_t
-ice_rx_reassemble_packets(struct ice_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[ICE_VPMD_RX_BURST] = {0}; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len) {
- end->data_len -= rxq->crc_len;
- } else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = NULL;
- end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- start = rx_bufs[buf_idx];
- end = start;
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index c01d8ede29..01533454ba 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -640,8 +640,8 @@ ice_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index 1c9dc0cc6d..02c028db73 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -19,7 +19,7 @@ sources = files(
testpmd_sources = files('ice_testpmd.c')
deps += ['hash', 'net', 'common_iavf']
-includes += include_directories('base', '../../common/iavf')
+includes += include_directories('base', '..')
if arch_subdir == 'x86'
sources += files('ice_rxtx_vec_sse.c')
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index a4d9ec9b08..2bab17c934 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -7,71 +7,10 @@
#include <stdint.h>
#include <ethdev_driver.h>
+#include <_common_intel/rx.h>
#include "ixgbe_ethdev.h"
#include "ixgbe_rxtx.h"
-static inline uint16_t
-reassemble_packets(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[nb_bufs]; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end != NULL) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len)
- end->data_len -= rxq->crc_len;
- else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 952b032eb6..7b35093075 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -516,8 +516,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a77370cdb7..a709bf8c7f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -639,8 +639,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ixgbe/meson.build b/drivers/net/ixgbe/meson.build
index 0ae12dd5ff..a65ff51379 100644
--- a/drivers/net/ixgbe/meson.build
+++ b/drivers/net/ixgbe/meson.build
@@ -35,6 +35,6 @@ elif arch_subdir == 'arm'
sources += files('ixgbe_recycle_mbufs_vec_common.c')
endif
-includes += include_directories('base')
+includes += include_directories('base', '..')
headers = files('rte_pmd_ixgbe.h')
--
2.43.0
* [PATCH v1 02/21] net/_common_intel: provide common Tx entry structures
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 01/21] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 03/21] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
` (18 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
The Tx entry structures, both vector and scalar, are common across Intel
drivers, so provide a single definition to be used everywhere.
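A brief sketch of how the two layouts relate - the function name and the
vector_path flag below are invented for illustration, but the (void *)
cast mirrors the ones visible in the diff, where the AVX-512 paths reuse
the scalar sw_ring allocation with the denser, mbuf-only vector layout:

#include <stdbool.h>
#include <_common_intel/tx.h>

static void
sketch_release_mbufs(struct ci_tx_entry *sw_ring, uint16_t nb_desc,
		bool vector_path)
{
	uint16_t i;

	if (vector_path) {
		/* vector paths store only the mbuf pointer per slot, so the
		 * same allocation is read back with the smaller entry type */
		struct ci_tx_entry_vec *swr = (void *)sw_ring;

		for (i = 0; i < nb_desc; i++) {
			if (swr[i].mbuf != NULL) {
				rte_pktmbuf_free_seg(swr[i].mbuf);
				swr[i].mbuf = NULL;
			}
		}
	} else {
		for (i = 0; i < nb_desc; i++) {
			if (sw_ring[i].mbuf != NULL) {
				rte_pktmbuf_free_seg(sw_ring[i].mbuf);
				sw_ring[i].mbuf = NULL;
			}
		}
	}
}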
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 27 +++++++++++++++++++
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 18 ++++++-------
drivers/net/i40e/i40e_rxtx.h | 14 +++-------
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 ++---
drivers/net/i40e/i40e_rxtx_vec_common.h | 4 +--
drivers/net/i40e/i40e_rxtx_vec_neon.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 2 +-
drivers/net/iavf/iavf_rxtx.c | 12 ++++-----
drivers/net/iavf/iavf_rxtx.h | 14 +++-------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 10 +++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 4 +--
drivers/net/iavf/iavf_rxtx_vec_sse.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 2 +-
drivers/net/ice/ice_rxtx.c | 16 +++++------
drivers/net/ice/ice_rxtx.h | 13 ++-------
drivers/net/ice/ice_rxtx_vec_avx2.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 6 ++---
drivers/net/ice/ice_rxtx_vec_common.h | 6 ++---
drivers/net/ice/ice_rxtx_vec_sse.c | 2 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++++------
drivers/net/ixgbe/ixgbe_rxtx.h | 22 +++------------
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 8 +++---
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +-
29 files changed, 105 insertions(+), 117 deletions(-)
create mode 100644 drivers/net/_common_intel/tx.h
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
new file mode 100644
index 0000000000..384352b9db
--- /dev/null
+++ b/drivers/net/_common_intel/tx.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef _COMMON_INTEL_TX_H_
+#define _COMMON_INTEL_TX_H_
+
+#include <stdint.h>
+#include <rte_mbuf.h>
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue.
+ */
+struct ci_tx_entry {
+ struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+ uint16_t next_id; /* Index of next descriptor in ring. */
+ uint16_t last_id; /* Index of last scattered descriptor. */
+};
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue in vector Tx.
+ */
+struct ci_tx_entry_vec {
+ struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+};
+
+#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 14424c9921..260d238ce4 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -56,7 +56,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
struct i40e_tx_queue *txq = tx_queue;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
uint16_t nb_recycle_mbufs;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 839c8a5442..2e1f07d2a1 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -378,7 +378,7 @@ i40e_build_ctob(uint32_t td_cmd,
static inline int
i40e_xmit_cleanup(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct i40e_tx_desc *txd = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -1081,8 +1081,8 @@ uint16_t
i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct i40e_tx_queue *txq;
- struct i40e_tx_entry *sw_ring;
- struct i40e_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
volatile struct i40e_tx_desc *txd;
volatile struct i40e_tx_desc *txr;
struct rte_mbuf *tx_pkt;
@@ -1331,7 +1331,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t tx_rs_thresh = txq->tx_rs_thresh;
uint16_t i = 0, j = 0;
struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
@@ -1418,7 +1418,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
uint16_t nb_pkts)
{
volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
- struct i40e_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
int mainpart, leftover;
@@ -2555,7 +2555,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket("i40e tx sw ring",
- sizeof(struct i40e_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2723,7 +2723,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
*/
#ifdef CC_AVX512_SUPPORT
if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
- struct i40e_vec_tx_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
if (txq->tx_tail < i) {
@@ -2768,7 +2768,7 @@ static int
i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
uint32_t free_cnt)
{
- struct i40e_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -2874,7 +2874,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
void
i40e_reset_tx_queue(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint16_t i, prev, size;
if (!txq) {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 33fc9770d9..0f5d3cb0b7 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _I40E_RXTX_H_
#define _I40E_RXTX_H_
+#include <_common_intel/tx.h>
+
#define RTE_PMD_I40E_RX_MAX_BURST 32
#define RTE_PMD_I40E_TX_MAX_BURST 32
@@ -122,16 +124,6 @@ struct i40e_rx_queue {
const struct rte_memzone *mz;
};
-struct i40e_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct i40e_vec_tx_entry {
- struct rte_mbuf *mbuf;
-};
-
/*
* Structure associated with each TX queue.
*/
@@ -139,7 +131,7 @@ struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
uint64_t tx_ring_phys_addr; /**< TX ring DMA address */
volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
- struct i40e_tx_entry *sw_ring; /**< virtual address of SW ring */
+ struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
volatile uint8_t *qtx_tail; /**< register address of tail */
uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 95829f65d5..ca1038eaa6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -553,7 +553,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 6dd6e55d9c..e8441de759 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -745,7 +745,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 506f1b5878..8b8a16daa8 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -757,7 +757,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
static __rte_always_inline int
i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
{
- struct i40e_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -920,7 +920,7 @@ vtx(volatile struct i40e_tx_desc *txdp,
}
static __rte_always_inline void
-tx_backlog_entry_avx512(struct i40e_vec_tx_entry *txep,
+tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -935,7 +935,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 1248cecacd..619fb89110 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -19,7 +19,7 @@
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -85,7 +85,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct i40e_tx_entry *txep,
+tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 159d971796..9b90a32e28 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -681,7 +681,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 3a8128e014..e1fa2ed543 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -700,7 +700,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6a093c6746..e337f20073 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -284,7 +284,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq)
static inline void
reset_tx_queue(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint32_t i, size;
uint16_t prev;
@@ -860,7 +860,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket("iavf tx sw ring",
- sizeof(struct iavf_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2379,7 +2379,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue,
static inline int
iavf_xmit_cleanup(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -2797,8 +2797,8 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct iavf_tx_queue *txq = tx_queue;
volatile struct iavf_tx_desc *txr = txq->tx_ring;
- struct iavf_tx_entry *txe_ring = txq->sw_ring;
- struct iavf_tx_entry *txe, *txn;
+ struct ci_tx_entry *txe_ring = txq->sw_ring;
+ struct ci_tx_entry *txe, *txn;
struct rte_mbuf *mb, *mb_seg;
uint64_t buf_dma_addr;
uint16_t desc_idx, desc_idx_last;
@@ -4268,7 +4268,7 @@ static int
iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
uint32_t free_cnt)
{
- struct iavf_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 7b56076d32..1a191f2c89 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _IAVF_RXTX_H_
#define _IAVF_RXTX_H_
+#include <_common_intel/tx.h>
+
/* In QLEN must be whole number of 32 descriptors. */
#define IAVF_ALIGN_RING_DESC 32
#define IAVF_MIN_RING_DESC 64
@@ -271,22 +273,12 @@ struct iavf_rx_queue {
uint64_t hw_time_update;
};
-struct iavf_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct iavf_tx_vec_entry {
- struct rte_mbuf *mbuf;
-};
-
/* Structure associated with each TX queue. */
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
- struct iavf_tx_entry *sw_ring; /* address array of SW ring */
+ struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
uint16_t tx_tail; /* current value of tail */
volatile uint8_t *qtx_tail; /* register address of tail */
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 0baf5045c8..e7d3d52655 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1736,7 +1736,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 5a88007096..a899309f94 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1847,7 +1847,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
static __rte_always_inline int
iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
{
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -1960,7 +1960,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry_avx512(struct iavf_tx_vec_entry *txep,
+tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -2313,7 +2313,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
@@ -2380,7 +2380,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, nb_mbuf, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
@@ -2478,7 +2478,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */
const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
- struct iavf_tx_vec_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
if (!txq->sw_ring || txq->nb_free == max_desc)
return;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 26b6f07614..df40857218 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -19,7 +19,7 @@
static __rte_always_inline int
iavf_tx_free_bufs(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -74,7 +74,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct iavf_tx_entry *txep,
+tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 48b01462ea..0a30b1ef64 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1368,7 +1368,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 91f4943a11..4b98e4066b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -389,7 +389,7 @@ reset_rx_queue(struct ice_rx_queue *rxq)
static inline void
reset_tx_queue(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint32_t i, size;
uint16_t prev;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0c7106c7e0..d584086a36 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1028,7 +1028,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
static void
ice_reset_tx_queue(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint16_t i, prev, size;
if (!txq) {
@@ -1509,7 +1509,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket(NULL,
- sizeof(struct ice_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2837,7 +2837,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
static inline int
ice_xmit_cleanup(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct ice_tx_desc *txd = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -2961,8 +2961,8 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
struct ice_tx_queue *txq;
volatile struct ice_tx_desc *tx_ring;
volatile struct ice_tx_desc *txd;
- struct ice_tx_entry *sw_ring;
- struct ice_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
struct rte_mbuf *tx_pkt;
struct rte_mbuf *m_seg;
uint32_t cd_tunneling_params;
@@ -3184,7 +3184,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
static __rte_always_inline int
ice_tx_free_bufs(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t i;
if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
@@ -3221,7 +3221,7 @@ static int
ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
uint32_t free_cnt)
{
- struct ice_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -3361,7 +3361,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
- struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
int mainpart, leftover;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 45f25b3609..8d1a1a8676 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -5,6 +5,7 @@
#ifndef _ICE_RXTX_H_
#define _ICE_RXTX_H_
+#include <_common_intel/tx.h>
#include "ice_ethdev.h"
#define ICE_ALIGN_RING_DESC 32
@@ -144,21 +145,11 @@ struct ice_rx_queue {
bool ts_enable; /* if rxq timestamp is enabled */
};
-struct ice_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct ice_vec_tx_entry {
- struct rte_mbuf *mbuf;
-};
-
struct ice_tx_queue {
uint16_t nb_tx_desc; /* number of TX descriptors */
rte_iova_t tx_ring_dma; /* TX ring DMA address */
volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
- struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
uint16_t tx_tail; /* current value of tail register */
volatile uint8_t *qtx_tail; /* register address of tail */
uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index ca247b155c..cf1862263a 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 1e603d5d8f..6b6aa3f1fe 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -862,7 +862,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
static __rte_always_inline int
ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
{
- struct ice_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -1040,7 +1040,7 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
}
static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ice_vec_tx_entry *txep,
+ice_tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -1055,7 +1055,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index dd7da4761f..3dc6061e84 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -15,7 +15,7 @@
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -70,7 +70,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
}
static __rte_always_inline void
-ice_tx_backlog_entry(struct ice_tx_entry *txep,
+ice_tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -135,7 +135,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
- struct ice_vec_tx_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
if (txq->tx_tail < i) {
for (; i < txq->nb_tx_desc; i++) {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 01533454ba..889b754cc1 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index d451562269..2241726ad8 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -52,7 +52,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
struct ixgbe_tx_queue *txq = tx_queue;
- struct ixgbe_tx_entry *txep;
+ struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
uint32_t status;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 7d16eb9df7..db4b993ebc 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -100,7 +100,7 @@
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t status;
int i, nb_free = 0;
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
@@ -199,7 +199,7 @@ ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
- struct ixgbe_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP-1;
int mainpart, leftover;
@@ -563,7 +563,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
static inline int
ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -624,8 +624,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
struct ixgbe_tx_queue *txq;
- struct ixgbe_tx_entry *sw_ring;
- struct ixgbe_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
volatile union ixgbe_adv_tx_desc *txr;
volatile union ixgbe_adv_tx_desc *txd, *txp;
struct rte_mbuf *tx_pkt;
@@ -2352,7 +2352,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
static int
ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
{
- struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -2490,7 +2490,7 @@ static void __rte_cold
ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = {{0}};
- struct ixgbe_tx_entry *txe = txq->sw_ring;
+ struct ci_tx_entry *txe = txq->sw_ring;
uint16_t prev, i;
/* Zero out HW ring memory */
@@ -2795,7 +2795,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
- sizeof(struct ixgbe_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq->sw_ring == NULL) {
ixgbe_tx_queue_release(txq);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 0550c1da60..1647396419 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _IXGBE_RXTX_H_
#define _IXGBE_RXTX_H_
+#include <_common_intel/tx.h>
+
/*
* Rings setup and release.
*
@@ -75,22 +77,6 @@ struct ixgbe_scattered_rx_entry {
struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
};
-/**
- * Structure associated with each descriptor of the TX ring of a TX queue.
- */
-struct ixgbe_tx_entry {
- struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
- uint16_t next_id; /**< Index of next descriptor in ring. */
- uint16_t last_id; /**< Index of last scattered descriptor. */
-};
-
-/**
- * Structure associated with each descriptor of the TX ring of a TX queue.
- */
-struct ixgbe_tx_entry_v {
- struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
-};
-
/**
* Structure associated with each RX queue.
*/
@@ -202,8 +188,8 @@ struct ixgbe_tx_queue {
volatile union ixgbe_adv_tx_desc *tx_ring;
uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
union {
- struct ixgbe_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
- struct ixgbe_tx_entry_v *sw_ring_v; /**< address of SW ring for vector PMD */
+ struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
+ struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
};
volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
uint16_t nb_tx_desc; /**< number of TX descriptors. */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 2bab17c934..e9592c0d08 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -14,7 +14,7 @@
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t status;
uint32_t n;
uint32_t i;
@@ -69,7 +69,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct ixgbe_tx_entry_v *txep,
+tx_backlog_entry(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -82,7 +82,7 @@ static inline void
_ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
{
unsigned int i;
- struct ixgbe_tx_entry_v *txe;
+ struct ci_tx_entry_vec *txe;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc)
@@ -149,7 +149,7 @@ static inline void
_ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } };
- struct ixgbe_tx_entry_v *txe = txq->sw_ring_v;
+ struct ci_tx_entry_vec *txe = txq->sw_ring_v;
uint16_t i;
/* Zero out HW ring memory */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 7b35093075..02b53c008e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -573,7 +573,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = DCMD_DTYP_FLAGS;
uint64_t rs = IXGBE_ADVTXD_DCMD_RS | DCMD_DTYP_FLAGS;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a709bf8c7f..c8b5377c9f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -695,7 +695,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = DCMD_DTYP_FLAGS;
uint64_t rs = IXGBE_ADVTXD_DCMD_RS|DCMD_DTYP_FLAGS;
--
2.43.0
* [PATCH v1 03/21] net/_common_intel: add Tx mbuf ring replenish fn
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 01/21] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 02/21] net/_common_intel: provide common Tx entry structures Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 04/21] drivers/net: align Tx queue struct field names Bruce Richardson
` (17 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, David Christensen, Ian Stokes,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
Move the short function used to place mbufs on the SW Tx ring to common
code to avoid duplication.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
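As a usage sketch, the following shows the wrap-around pattern that
each driver's fixed-burst Tx path repeats around the new helper
(illustrative code, assuming the header path added by this series;
the wrapper function and its name are not taken from any one driver):

#include <stdint.h>
#include <rte_mbuf.h>
#include <_common_intel/tx.h>	/* ci_tx_entry, ci_tx_backlog_entry */

/* Record nb_pkts mbuf pointers in the SW ring starting at tx_id:
 * copy up to the end of the ring first, then wrap to index 0 for
 * the remainder. Returns the new tail position.
 */
static inline uint16_t
record_tx_backlog(struct ci_tx_entry *sw_ring, uint16_t nb_desc,
		uint16_t tx_id, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	uint16_t n = (uint16_t)(nb_desc - tx_id);

	if (nb_pkts >= n) {
		ci_tx_backlog_entry(&sw_ring[tx_id], tx_pkts, n);
		tx_pkts += n;
		nb_pkts = (uint16_t)(nb_pkts - n);
		tx_id = 0;
	}
	ci_tx_backlog_entry(&sw_ring[tx_id], tx_pkts, nb_pkts);
	return (uint16_t)(tx_id + nb_pkts);
}

The mbufs recorded this way are released later by the Tx free/cleanup
functions, once the hardware reports the descriptors complete.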
drivers/net/_common_intel/tx.h | 7 +++++++
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_common.h | 10 ----------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 ++--
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 ++--
drivers/net/iavf/iavf_rxtx_vec_common.h | 10 ----------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 4 ++--
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 ++--
drivers/net/ice/ice_rxtx_vec_common.h | 10 ----------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 ++--
12 files changed, 23 insertions(+), 46 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 384352b9db..5397007411 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -24,4 +24,11 @@ struct ci_tx_entry_vec {
struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
};
+static __rte_always_inline void
+ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	for (uint16_t i = 0; i < nb_pkts; ++i)
+ txep[i].mbuf = tx_pkts[i];
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index ca1038eaa6..80f07a3e10 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -575,7 +575,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -592,7 +592,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index e8441de759..b26bae4757 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -765,7 +765,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -783,7 +783,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 619fb89110..325e99c1a4 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -84,16 +84,6 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 9b90a32e28..26bc345a0a 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -702,7 +702,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -719,7 +719,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index e1fa2ed543..ebc32b0d27 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -721,7 +721,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -738,7 +738,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index e7d3d52655..28885800e0 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1757,7 +1757,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
iavf_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -1775,7 +1775,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index df40857218..2c118cc059 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -73,16 +73,6 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
return txq->rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 0a30b1ef64..bc4b8f14c8 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1390,7 +1390,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -1407,7 +1407,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index cf1862263a..336697e72d 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -881,7 +881,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -899,7 +899,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- ice_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 3dc6061e84..32e4541267 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -69,16 +69,6 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-ice_tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 889b754cc1..debdd8f6a2 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -724,7 +724,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
ice_vtx1(txdp, *tx_pkts, flags);
@@ -741,7 +741,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- ice_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v1 04/21] drivers/net: align Tx queue struct field names
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (2 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 03/21] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 05/21] drivers/net: add prefix for driver-specific structs Bruce Richardson
` (16 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin,
Konstantin Ananyev, Anatoly Burakov, Wathsala Vithanage
Across the various Intel drivers, fields in the Tx queue structure
that serve the same function are sometimes given different names.
Rename those fields to align the structures for future merging.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
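For reference, the naming convention the queue structures converge on,
as an illustrative fragment (not a literal struct from any driver; the
replaced per-driver names are noted in the comments):

#include <stdint.h>
#include <rte_common.h>	/* rte_iova_t */

struct example_tx_queue {
	rte_iova_t tx_ring_dma;		/* was tx_ring_phys_addr (i40e/iavf/ixgbe) */
	volatile uint8_t *qtx_tail;	/* was tdt_reg_addr (ixgbe) */
	uint16_t nb_tx_used;		/* was nb_used (iavf) */
	uint16_t nb_tx_free;		/* was nb_free (iavf) */
	uint16_t tx_free_thresh;	/* was free_thresh (iavf) */
	uint16_t tx_rs_thresh;		/* was rs_thresh (iavf) */
	uint16_t tx_next_dd;		/* was next_dd (iavf) */
	uint16_t tx_next_rs;		/* was next_rs (iavf) */
};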
drivers/net/i40e/i40e_rxtx.c | 6 +--
drivers/net/i40e/i40e_rxtx.h | 2 +-
drivers/net/iavf/iavf_rxtx.c | 60 ++++++++++++-------------
drivers/net/iavf/iavf_rxtx.h | 14 +++---
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 19 ++++----
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 57 +++++++++++------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 24 +++++-----
drivers/net/iavf/iavf_rxtx_vec_sse.c | 18 ++++----
drivers/net/iavf/iavf_vchnl.c | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++----
drivers/net/ixgbe/ixgbe_rxtx.h | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +-
14 files changed, 116 insertions(+), 114 deletions(-)
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 2e1f07d2a1..b0bb20fe9a 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2549,7 +2549,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
/* Allocate software ring */
@@ -2923,7 +2923,7 @@ i40e_tx_queue_init(struct i40e_tx_queue *txq)
/* clear the context structure first */
memset(&tx_ctx, 0, sizeof(tx_ctx));
tx_ctx.new_context = 1;
- tx_ctx.base = txq->tx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT;
+ tx_ctx.base = txq->tx_ring_dma / I40E_QUEUE_BASE_ADDR_UNIT;
tx_ctx.qlen = txq->nb_tx_desc;
#ifdef RTE_LIBRTE_IEEE1588
@@ -3209,7 +3209,7 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
txq->vsi = pf->fdir.fdir_vsi;
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
/*
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 0f5d3cb0b7..f420c98687 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -129,7 +129,7 @@ struct i40e_rx_queue {
*/
struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
- uint64_t tx_ring_phys_addr; /**< TX ring DMA address */
+ rte_iova_t tx_ring_dma; /**< TX ring DMA address */
volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index e337f20073..adaaeb4625 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -216,8 +216,8 @@ static inline bool
check_tx_vec_allow(struct iavf_tx_queue *txq)
{
if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) &&
- txq->rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
- txq->rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) {
+ txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
+ txq->tx_rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) {
PMD_INIT_LOG(DEBUG, "Vector tx can be enabled on this txq.");
return true;
}
@@ -309,13 +309,13 @@ reset_tx_queue(struct iavf_tx_queue *txq)
}
txq->tx_tail = 0;
- txq->nb_used = 0;
+ txq->nb_tx_used = 0;
txq->last_desc_cleaned = txq->nb_tx_desc - 1;
- txq->nb_free = txq->nb_tx_desc - 1;
+ txq->nb_tx_free = txq->nb_tx_desc - 1;
- txq->next_dd = txq->rs_thresh - 1;
- txq->next_rs = txq->rs_thresh - 1;
+ txq->tx_next_dd = txq->tx_rs_thresh - 1;
+ txq->tx_next_rs = txq->tx_rs_thresh - 1;
}
static int
@@ -845,8 +845,8 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
txq->nb_tx_desc = nb_desc;
- txq->rs_thresh = tx_rs_thresh;
- txq->free_thresh = tx_free_thresh;
+ txq->tx_rs_thresh = tx_rs_thresh;
+ txq->tx_free_thresh = tx_free_thresh;
txq->queue_id = queue_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
@@ -881,7 +881,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
rte_free(txq);
return -ENOMEM;
}
- txq->tx_ring_phys_addr = mz->iova;
+ txq->tx_ring_dma = mz->iova;
txq->tx_ring = (struct iavf_tx_desc *)mz->addr;
txq->mz = mz;
@@ -2387,7 +2387,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
volatile struct iavf_tx_desc *txd = txq->tx_ring;
- desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+ desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
if (desc_to_clean_to >= nb_tx_desc)
desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
@@ -2411,7 +2411,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
txq->last_desc_cleaned = desc_to_clean_to;
- txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
return 0;
}
@@ -2807,7 +2807,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Check if the descriptor ring needs to be cleaned. */
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_xmit_cleanup(txq);
desc_idx = txq->tx_tail;
@@ -2862,14 +2862,14 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
"port_id=%u queue_id=%u tx_first=%u tx_last=%u",
txq->port_id, txq->queue_id, desc_idx, desc_idx_last);
- if (nb_desc_required > txq->nb_free) {
+ if (nb_desc_required > txq->nb_tx_free) {
if (iavf_xmit_cleanup(txq)) {
if (idx == 0)
return 0;
goto end_of_tx;
}
- if (unlikely(nb_desc_required > txq->rs_thresh)) {
- while (nb_desc_required > txq->nb_free) {
+ if (unlikely(nb_desc_required > txq->tx_rs_thresh)) {
+ while (nb_desc_required > txq->nb_tx_free) {
if (iavf_xmit_cleanup(txq)) {
if (idx == 0)
return 0;
@@ -2991,10 +2991,10 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* The last packet data descriptor needs End Of Packet (EOP) */
ddesc_cmd = IAVF_TX_DESC_CMD_EOP;
- txq->nb_used = (uint16_t)(txq->nb_used + nb_desc_required);
- txq->nb_free = (uint16_t)(txq->nb_free - nb_desc_required);
+ txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_desc_required);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_desc_required);
- if (txq->nb_used >= txq->rs_thresh) {
+ if (txq->nb_tx_used >= txq->tx_rs_thresh) {
PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
"%4u (port=%d queue=%d)",
desc_idx_last, txq->port_id, txq->queue_id);
@@ -3002,7 +3002,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ddesc_cmd |= IAVF_TX_DESC_CMD_RS;
/* Update txq RS bit counters */
- txq->nb_used = 0;
+ txq->nb_tx_used = 0;
}
ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
@@ -4278,11 +4278,11 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
tx_id = txq->tx_tail;
tx_last = tx_id;
- if (txq->nb_free == 0 && iavf_xmit_cleanup(txq))
+ if (txq->nb_tx_free == 0 && iavf_xmit_cleanup(txq))
return 0;
- nb_tx_to_clean = txq->nb_free;
- nb_tx_free_last = txq->nb_free;
+ nb_tx_to_clean = txq->nb_tx_free;
+ nb_tx_free_last = txq->nb_tx_free;
if (!free_cnt)
free_cnt = txq->nb_tx_desc;
@@ -4305,16 +4305,16 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
tx_id = swr_ring[tx_id].next_id;
} while (--nb_tx_to_clean && pkt_cnt < free_cnt && tx_id != tx_last);
- if (txq->rs_thresh > txq->nb_tx_desc -
- txq->nb_free || tx_id == tx_last)
+ if (txq->tx_rs_thresh > txq->nb_tx_desc -
+ txq->nb_tx_free || tx_id == tx_last)
break;
if (pkt_cnt < free_cnt) {
if (iavf_xmit_cleanup(txq))
break;
- nb_tx_to_clean = txq->nb_free - nb_tx_free_last;
- nb_tx_free_last = txq->nb_free;
+ nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+ nb_tx_free_last = txq->nb_tx_free;
}
}
@@ -4356,8 +4356,8 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_free_thresh = txq->free_thresh;
- qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
qinfo->conf.offloads = txq->offloads;
qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
}
@@ -4432,8 +4432,8 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
desc = txq->tx_tail + offset;
/* go to next desc that has the RS bit */
- desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
- txq->rs_thresh;
+ desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+ txq->tx_rs_thresh;
if (desc >= txq->nb_tx_desc) {
desc -= txq->nb_tx_desc;
if (desc >= txq->nb_tx_desc)
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 1a191f2c89..44e2de731c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -277,25 +277,25 @@ struct iavf_rx_queue {
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
- uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
+ rte_iova_t tx_ring_dma; /* Tx ring DMA address */
struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
uint16_t tx_tail; /* current value of tail */
volatile uint8_t *qtx_tail; /* register address of tail */
/* number of used desc since RS bit set */
- uint16_t nb_used;
- uint16_t nb_free;
+ uint16_t nb_tx_used;
+ uint16_t nb_tx_free;
uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t free_thresh;
- uint16_t rs_thresh;
+ uint16_t tx_free_thresh;
+ uint16_t tx_rs_thresh;
uint8_t rel_mbufs_type;
struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
uint16_t port_id;
uint16_t queue_id;
uint64_t offloads;
- uint16_t next_dd; /* next to set RS, for VPMD */
- uint16_t next_rs; /* next to check DD, for VPMD */
+	uint16_t tx_next_dd;	/* next to check DD, for VPMD */
+	uint16_t tx_next_rs;	/* next to set RS, for VPMD */
uint16_t ipsec_crypto_pkt_md_offset;
uint64_t mbuf_errors;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 28885800e0..42e09a2adf 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1742,18 +1742,19 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs(txq);
- nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
+ nb_commit = nb_pkts;
tx_id = txq->tx_tail;
txdp = &txq->tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -1768,7 +1769,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -1780,12 +1781,12 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -1806,7 +1807,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec_avx2(tx_queue, &tx_pkts[nb_tx],
num, offload);
nb_tx += ret;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index a899309f94..dc1fef24f0 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1854,18 +1854,18 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+ if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
- n = txq->rs_thresh >> txq->use_ctx;
+ n = txq->tx_rs_thresh >> txq->use_ctx;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
txep = (void *)txq->sw_ring;
- txep += (txq->next_dd >> txq->use_ctx) - (n - 1);
+ txep += (txq->tx_next_dd >> txq->use_ctx) - (n - 1);
if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
@@ -1951,12 +1951,12 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
done:
/* buffers were freed, update counters */
- txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
- txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
- if (txq->next_dd >= txq->nb_tx_desc)
- txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
- return txq->rs_thresh;
+ return txq->tx_rs_thresh;
}
static __rte_always_inline void
@@ -2319,19 +2319,20 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs_avx512(txq);
- nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
+ nb_commit = nb_pkts;
tx_id = txq->tx_tail;
txdp = &txq->tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -2346,7 +2347,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -2359,12 +2360,12 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -2386,10 +2387,10 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs_avx512(txq);
- nb_commit = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts << 1);
+ nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1);
nb_commit &= 0xFFFE;
if (unlikely(nb_commit == 0))
return 0;
@@ -2400,7 +2401,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
txep += (tx_id >> 1);
- txq->nb_free = (uint16_t)(txq->nb_free - nb_commit);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_commit);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (n != 0 && nb_commit >= n) {
@@ -2414,7 +2415,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
tx_id = 0;
/* avoid reach the end of ring */
txdp = txq->tx_ring;
@@ -2427,12 +2428,12 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -2452,7 +2453,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec_avx512(tx_queue, &tx_pkts[nb_tx],
num, offload);
nb_tx += ret;
@@ -2480,10 +2481,10 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
- if (!txq->sw_ring || txq->nb_free == max_desc)
+ if (!txq->sw_ring || txq->nb_tx_free == max_desc)
return;
- i = (txq->next_dd - txq->rs_thresh + 1) >> txq->use_ctx;
+ i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx;
while (i != end_desc) {
rte_pktmbuf_free_seg(swr[i].mbuf);
swr[i].mbuf = NULL;
@@ -2517,7 +2518,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->tx_rs_thresh);
num = num >> 1;
ret = iavf_xmit_fixed_burst_vec_avx512_ctx(tx_queue, &tx_pkts[nb_tx],
num, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 2c118cc059..ff24055c34 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -26,17 +26,17 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+ if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
- n = txq->rs_thresh;
+ n = txq->tx_rs_thresh;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
- txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+ txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
if (likely(m != NULL)) {
free[0] = m;
@@ -65,12 +65,12 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
}
/* buffers were freed, update counters */
- txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
- txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
- if (txq->next_dd >= txq->nb_tx_desc)
- txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
- return txq->rs_thresh;
+ return txq->tx_rs_thresh;
}
static inline void
@@ -109,10 +109,10 @@ _iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq)
unsigned i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
- if (!txq->sw_ring || txq->nb_free == max_desc)
+ if (!txq->sw_ring || txq->nb_tx_free == max_desc)
return;
- i = txq->next_dd - txq->rs_thresh + 1;
+ i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
while (i != txq->tx_tail) {
rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
txq->sw_ring[i].mbuf = NULL;
@@ -169,8 +169,8 @@ iavf_tx_vec_queue_default(struct iavf_tx_queue *txq)
if (!txq)
return -1;
- if (txq->rs_thresh < IAVF_VPMD_TX_MAX_BURST ||
- txq->rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF)
+ if (txq->tx_rs_thresh < IAVF_VPMD_TX_MAX_BURST ||
+ txq->tx_rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF)
return -1;
if (txq->offloads & IAVF_TX_NO_VECTOR_FLAGS)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index bc4b8f14c8..ed8455d669 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1374,10 +1374,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
int i;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs(txq);
- nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
nb_commit = nb_pkts;
@@ -1386,7 +1386,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txdp = &txq->tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -1400,7 +1400,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -1412,12 +1412,12 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -1441,7 +1441,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num);
nb_tx += ret;
nb_pkts -= ret;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 065ab3594c..0646a2f978 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1247,7 +1247,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
/* Virtchnnl configure tx queues by pairs */
if (i < adapter->dev_data->nb_tx_queues) {
vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
- vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+ vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
}
vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
diff --git a/drivers/net/ixgbe/base/ixgbe_osdep.h b/drivers/net/ixgbe/base/ixgbe_osdep.h
index 502f386b56..95dbe2bedd 100644
--- a/drivers/net/ixgbe/base/ixgbe_osdep.h
+++ b/drivers/net/ixgbe/base/ixgbe_osdep.h
@@ -124,7 +124,7 @@ static inline uint32_t ixgbe_read_addr(volatile void* addr)
rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg)
#define IXGBE_PCI_REG_ADDR(hw, reg) \
- ((volatile uint32_t *)((char *)(hw)->hw_addr + (reg)))
+ ((volatile void *)((char *)(hw)->hw_addr + (reg)))
#define IXGBE_PCI_REG_ARRAY_ADDR(hw, reg, index) \
IXGBE_PCI_REG_ADDR((hw), (reg) + ((index) << 2))
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index db4b993ebc..0a80b944f0 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -308,7 +308,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* update tail pointer */
rte_wmb();
- IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
@@ -946,7 +946,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
(unsigned) txq->port_id, (unsigned) txq->queue_id,
(unsigned) tx_id, (unsigned) nb_tx);
- IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, tx_id);
+ IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, tx_id);
txq->tx_tail = tx_id;
return nb_tx;
@@ -2786,11 +2786,11 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
hw->mac.type == ixgbe_mac_X550_vf ||
hw->mac.type == ixgbe_mac_X550EM_x_vf ||
hw->mac.type == ixgbe_mac_X550EM_a_vf)
- txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx));
+ txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx));
else
- txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
+ txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
/* Allocate software ring */
@@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
- txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
+ txq->sw_ring, txq->tx_ring, txq->tx_ring_dma);
/* set up vector or scalar TX function as appropriate */
ixgbe_set_tx_function(dev, txq);
@@ -5303,7 +5303,7 @@ ixgbe_dev_tx_init(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- bus_addr = txq->tx_ring_phys_addr;
+ bus_addr = txq->tx_ring_dma;
IXGBE_WRITE_REG(hw, IXGBE_TDBAL(txq->reg_idx),
(uint32_t)(bus_addr & 0x00000000ffffffffULL));
IXGBE_WRITE_REG(hw, IXGBE_TDBAH(txq->reg_idx),
@@ -5887,7 +5887,7 @@ ixgbevf_dev_tx_init(struct rte_eth_dev *dev)
/* Setup the Base and Length of the Tx Descriptor Rings */
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- bus_addr = txq->tx_ring_phys_addr;
+ bus_addr = txq->tx_ring_dma;
IXGBE_WRITE_REG(hw, IXGBE_VFTDBAL(i),
(uint32_t)(bus_addr & 0x00000000ffffffffULL));
IXGBE_WRITE_REG(hw, IXGBE_VFTDBAH(i),
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 1647396419..00e2009b3e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -186,12 +186,12 @@ struct ixgbe_advctx_info {
struct ixgbe_tx_queue {
/** TX ring virtual address. */
volatile union ixgbe_adv_tx_desc *tx_ring;
- uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
+ rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
union {
struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
};
- volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
+ volatile uint8_t *qtx_tail; /**< Address of TDT register. */
uint16_t nb_tx_desc; /**< number of TX descriptors. */
uint16_t tx_tail; /**< current value of TDT reg. */
/**< Start freeing TX buffers if there are less free descriptors than
@@ -218,7 +218,7 @@ struct ixgbe_tx_queue {
/** Hardware context0 history. */
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
const struct ixgbe_txq_ops *ops; /**< txq ops */
- uint8_t tx_deferred_start; /**< not in global dev start. */
+ bool tx_deferred_start; /**< not in global dev start. */
#ifdef RTE_LIB_SECURITY
uint8_t using_ipsec;
/**< indicates that IPsec TX feature is in use */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 02b53c008e..871c1a7cd2 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -628,7 +628,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_tail = tx_id;
- IXGBE_PCI_REG_WRITE(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index c8b5377c9f..37f2079519 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -751,7 +751,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_tail = tx_id;
- IXGBE_PCI_REG_WC_WRITE(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WC_WRITE(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
--
2.43.0
* [PATCH v1 05/21] drivers/net: add prefix for driver-specific structs
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (3 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 04/21] drivers/net: align Tx queue struct field names Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 06/21] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
` (15 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
In preparation for merging the Tx structs for multiple drivers into a
single struct, rename the driver-specific pointers in each struct to
carry a driver prefix, to avoid name conflicts.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
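As a hypothetical before/after fragment for the i40e case (the diffs
below are authoritative; driver-specific pointer members gain an i40e_
prefix, while members shared across drivers keep the names aligned in
the previous patch):

#include <rte_common.h>		/* rte_iova_t */
#include <_common_intel/tx.h>	/* ci_tx_entry */

struct i40e_tx_queue_sketch {
	volatile struct i40e_tx_desc *i40e_tx_ring;	/* was: tx_ring */
	struct i40e_vsi *i40e_vsi;			/* was: vsi */
	struct ci_tx_entry *sw_ring;			/* common, unchanged */
	rte_iova_t tx_ring_dma;				/* common, unchanged */
	/* ... */
};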
drivers/net/i40e/i40e_fdir.c | 6 +--
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 30 ++++++------
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 8 ++--
drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +-
drivers/net/i40e/i40e_rxtx_vec_neon.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_sse.c | 6 +--
drivers/net/iavf/iavf_rxtx.c | 24 +++++-----
drivers/net/iavf/iavf_rxtx.h | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 6 +--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++---
drivers/net/iavf/iavf_rxtx_vec_common.h | 2 +-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 6 +--
drivers/net/ice/ice_dcf_ethdev.c | 4 +-
drivers/net/ice/ice_rxtx.c | 48 +++++++++----------
drivers/net/ice/ice_rxtx.h | 4 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 6 +--
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 ++--
drivers/net/ice/ice_rxtx_vec_common.h | 4 +-
drivers/net/ice/ice_rxtx_vec_sse.c | 6 +--
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 22 ++++-----
drivers/net/ixgbe/ixgbe_rxtx.h | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 6 +--
29 files changed, 128 insertions(+), 128 deletions(-)
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index 47f79ecf11..c600167634 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -1383,7 +1383,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev)
volatile struct i40e_tx_desc *tmp_txdp;
tmp_tail = txq->tx_tail;
- tmp_txdp = &txq->tx_ring[tmp_tail + 1];
+ tmp_txdp = &txq->i40e_tx_ring[tmp_tail + 1];
do {
if ((tmp_txdp->cmd_type_offset_bsz &
@@ -1640,7 +1640,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
PMD_DRV_LOG(INFO, "filling filter programming descriptor.");
fdirdp = (volatile struct i40e_filter_program_desc *)
- (&txq->tx_ring[txq->tx_tail]);
+ (&txq->i40e_tx_ring[txq->tx_tail]);
fdirdp->qindex_flex_ptype_vsi =
rte_cpu_to_le_32((fdir_action->rx_queue <<
@@ -1710,7 +1710,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
fdirdp->fd_id = rte_cpu_to_le_32(filter->soft_id);
PMD_DRV_LOG(INFO, "filling transmit descriptor.");
- txdp = &txq->tx_ring[txq->tx_tail + 1];
+ txdp = &txq->i40e_tx_ring[txq->tx_tail + 1];
txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr[txq->tx_tail >> 1]);
td_cmd = I40E_TX_DESC_CMD_EOP |
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 260d238ce4..8679e5c1fd 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -75,7 +75,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
return 0;
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b0bb20fe9a..34ef931859 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -379,7 +379,7 @@ static inline int
i40e_xmit_cleanup(struct i40e_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile struct i40e_tx_desc *txd = txq->tx_ring;
+ volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -1103,7 +1103,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq = tx_queue;
sw_ring = txq->sw_ring;
- txr = txq->tx_ring;
+ txr = txq->i40e_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
@@ -1338,7 +1338,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
const uint16_t k = RTE_ALIGN_FLOOR(tx_rs_thresh, RTE_I40E_TX_MAX_FREE_BUF_SZ);
const uint16_t m = tx_rs_thresh % RTE_I40E_TX_MAX_FREE_BUF_SZ;
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -1417,7 +1417,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
+ volatile struct i40e_tx_desc *txdp = &txq->i40e_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
@@ -1445,7 +1445,7 @@ tx_xmit_pkts(struct i40e_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- volatile struct i40e_tx_desc *txr = txq->tx_ring;
+ volatile struct i40e_tx_desc *txr = txq->i40e_tx_ring;
uint16_t n = 0;
/**
@@ -1556,7 +1556,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
bool pkt_error = false;
const char *reason = NULL;
uint16_t good_pkts = nb_pkts;
- struct i40e_adapter *adapter = txq->vsi->adapter;
+ struct i40e_adapter *adapter = txq->i40e_vsi->adapter;
for (idx = 0; idx < nb_pkts; idx++) {
mb = tx_pkts[idx];
@@ -2329,7 +2329,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->i40e_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_le_to_cpu_64(I40E_TXD_QW1_DTYPE_MASK);
expect = rte_cpu_to_le_64(
I40E_TX_DESC_DTYPE_DESC_DONE << I40E_TXD_QW1_DTYPE_SHIFT);
@@ -2527,7 +2527,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct i40e_tx_desc) * I40E_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "i40e_tx_ring", queue_idx,
ring_size, I40E_RING_BASE_ALIGN, socket_id);
if (!tz) {
i40e_tx_queue_release(txq);
@@ -2546,11 +2546,11 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->reg_idx = reg_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
- txq->vsi = vsi;
+ txq->i40e_vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
+ txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr;
/* Allocate software ring */
txq->sw_ring =
@@ -2885,11 +2885,11 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct i40e_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->i40e_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile struct i40e_tx_desc *txd = &txq->tx_ring[i];
+ volatile struct i40e_tx_desc *txd = &txq->i40e_tx_ring[i];
txd->cmd_type_offset_bsz =
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
@@ -2914,7 +2914,7 @@ int
i40e_tx_queue_init(struct i40e_tx_queue *txq)
{
enum i40e_status_code err = I40E_SUCCESS;
- struct i40e_vsi *vsi = txq->vsi;
+ struct i40e_vsi *vsi = txq->i40e_vsi;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t pf_q = txq->reg_idx;
struct i40e_hmc_obj_txq tx_ctx;
@@ -3207,10 +3207,10 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
txq->nb_tx_desc = I40E_FDIR_NUM_TX_DESC;
txq->queue_id = I40E_FDIR_QUEUE_ID;
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
- txq->vsi = pf->fdir.fdir_vsi;
+ txq->i40e_vsi = pf->fdir.fdir_vsi;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
+ txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr;
/*
* don't need to allocate software ring and reset for the fdir
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index f420c98687..8315ee2f59 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -130,7 +130,7 @@ struct i40e_rx_queue {
struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
rte_iova_t tx_ring_dma; /**< TX ring DMA address */
- volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
+ volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */
struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
volatile uint8_t *qtx_tail; /**< register address of tail */
@@ -150,7 +150,7 @@ struct i40e_tx_queue {
uint16_t port_id; /**< Device port identifier. */
uint16_t queue_id; /**< TX queue index. */
uint16_t reg_idx;
- struct i40e_vsi *vsi; /**< the VSI this queue belongs to */
+ struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */
uint16_t tx_next_dd;
uint16_t tx_next_rs;
bool q_set; /**< indicate if tx queue has been configured */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 80f07a3e10..bf0e9ebd71 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -568,7 +568,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -588,7 +588,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -598,7 +598,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index b26bae4757..5042e348db 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -758,7 +758,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -779,7 +779,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -789,7 +789,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 8b8a16daa8..04fbe3b2e3 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -764,7 +764,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -948,7 +948,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -970,7 +970,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->i40e_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -980,7 +980,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 325e99c1a4..e81f958361 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -26,7 +26,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 26bc345a0a..05191e4884 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -695,7 +695,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -715,7 +715,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -725,7 +725,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index ebc32b0d27..d81b553842 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -714,7 +714,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -734,7 +734,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -744,7 +744,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index adaaeb4625..6eda91e76b 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -296,11 +296,11 @@ reset_tx_queue(struct iavf_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct iavf_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->iavf_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i].cmd_type_offset_bsz =
+ txq->iavf_tx_ring[i].cmd_type_offset_bsz =
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
txe[i].mbuf = NULL;
txe[i].last_id = i;
@@ -851,7 +851,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
- txq->vsi = vsi;
+ txq->iavf_vsi = vsi;
if (iavf_ipsec_crypto_supported(adapter))
txq->ipsec_crypto_pkt_md_offset =
@@ -872,7 +872,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct iavf_tx_desc) * IAVF_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, IAVF_DMA_MEM_ALIGN);
- mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ mz = rte_eth_dma_zone_reserve(dev, "iavf_tx_ring", queue_idx,
ring_size, IAVF_RING_BASE_ALIGN,
socket_id);
if (!mz) {
@@ -882,7 +882,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
txq->tx_ring_dma = mz->iova;
- txq->tx_ring = (struct iavf_tx_desc *)mz->addr;
+ txq->iavf_tx_ring = (struct iavf_tx_desc *)mz->addr;
txq->mz = mz;
reset_tx_queue(txq);
@@ -2385,7 +2385,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
uint16_t desc_to_clean_to;
uint16_t nb_tx_to_clean;
- volatile struct iavf_tx_desc *txd = txq->tx_ring;
+ volatile struct iavf_tx_desc *txd = txq->iavf_tx_ring;
desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
if (desc_to_clean_to >= nb_tx_desc)
@@ -2796,7 +2796,7 @@ uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct iavf_tx_queue *txq = tx_queue;
- volatile struct iavf_tx_desc *txr = txq->tx_ring;
+ volatile struct iavf_tx_desc *txr = txq->iavf_tx_ring;
struct ci_tx_entry *txe_ring = txq->sw_ring;
struct ci_tx_entry *txe, *txn;
struct rte_mbuf *mb, *mb_seg;
@@ -3803,10 +3803,10 @@ iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
struct iavf_tx_queue *txq = tx_queue;
enum iavf_tx_burst_type tx_burst_type;
- if (!txq->vsi || txq->vsi->adapter->no_poll)
+ if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll)
return 0;
- tx_burst_type = txq->vsi->adapter->tx_burst_type;
+ tx_burst_type = txq->iavf_vsi->adapter->tx_burst_type;
return iavf_tx_pkt_burst_ops[tx_burst_type](tx_queue,
tx_pkts, nb_pkts);
@@ -3824,9 +3824,9 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
const char *reason = NULL;
bool pkt_error = false;
struct iavf_tx_queue *txq = tx_queue;
- struct iavf_adapter *adapter = txq->vsi->adapter;
+ struct iavf_adapter *adapter = txq->iavf_vsi->adapter;
enum iavf_tx_burst_type tx_burst_type =
- txq->vsi->adapter->tx_burst_type;
+ txq->iavf_vsi->adapter->tx_burst_type;
for (idx = 0; idx < nb_pkts; idx++) {
mb = tx_pkts[idx];
@@ -4440,7 +4440,7 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->iavf_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_le_to_cpu_64(IAVF_TXD_QW1_DTYPE_MASK);
expect = rte_cpu_to_le_64(
IAVF_TX_DESC_DTYPE_DESC_DONE << IAVF_TXD_QW1_DTYPE_SHIFT);
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 44e2de731c..cc1eaaf54c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -276,7 +276,7 @@ struct iavf_rx_queue {
/* Structure associated with each TX queue. */
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
+ volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */
rte_iova_t tx_ring_dma; /* Tx ring DMA address */
struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
@@ -289,7 +289,7 @@ struct iavf_tx_queue {
uint16_t tx_free_thresh;
uint16_t tx_rs_thresh;
uint8_t rel_mbufs_type;
- struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
+ struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */
uint16_t port_id;
uint16_t queue_id;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 42e09a2adf..f33ceceee1 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1751,7 +1751,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -1772,7 +1772,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -1782,7 +1782,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index dc1fef24f0..97420a75fd 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1854,7 +1854,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -2328,7 +2328,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -2350,7 +2350,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
}
@@ -2361,7 +2361,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
@@ -2397,7 +2397,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = nb_commit >> 1;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += (tx_id >> 1);
@@ -2418,7 +2418,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
tx_id = 0;
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->iavf_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -2429,7 +2429,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index ff24055c34..6305c8cdd6 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -26,7 +26,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ed8455d669..64c3bf0eaa 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1383,7 +1383,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -1403,7 +1403,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -1413,7 +1413,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4b98e4066b..4ffd1f5567 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -401,11 +401,11 @@ reset_tx_queue(struct ice_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->ice_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i].cmd_type_offset_bsz =
+ txq->ice_tx_ring[i].cmd_type_offset_bsz =
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
txe[i].mbuf = NULL;
txe[i].last_id = i;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index d584086a36..5ec92f6d0c 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -776,7 +776,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
if (!txq_elem)
return -ENOMEM;
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
hw = ICE_VSI_TO_HW(vsi);
pf = ICE_VSI_TO_PF(vsi);
@@ -966,7 +966,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
if (!txq_elem)
return -ENOMEM;
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
hw = ICE_VSI_TO_HW(vsi);
memset(&tx_ctx, 0, sizeof(tx_ctx));
@@ -1039,11 +1039,11 @@ ice_reset_tx_queue(struct ice_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->ice_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+ volatile struct ice_tx_desc *txd = &txq->ice_tx_ring[i];
txd->cmd_type_offset_bsz =
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
@@ -1153,7 +1153,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
PMD_DRV_LOG(INFO, "TX queue %u not started", tx_queue_id);
return 0;
}
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
q_ids[0] = txq->reg_idx;
q_teids[0] = txq->q_teid;
@@ -1479,7 +1479,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "ice_tx_ring", queue_idx,
ring_size, ICE_RING_BASE_ALIGN,
socket_id);
if (!tz) {
@@ -1500,11 +1500,11 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
txq->reg_idx = vsi->base_queue + queue_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
- txq->vsi = vsi;
+ txq->ice_vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = tz->addr;
+ txq->ice_tx_ring = tz->addr;
/* Allocate software ring */
txq->sw_ring =
@@ -2372,7 +2372,7 @@ ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->ice_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
ICE_TXD_QW1_DTYPE_S);
@@ -2452,10 +2452,10 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
txq->nb_tx_desc = ICE_FDIR_NUM_TX_DESC;
txq->queue_id = ICE_FDIR_QUEUE_ID;
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
- txq->vsi = pf->fdir.fdir_vsi;
+ txq->ice_vsi = pf->fdir.fdir_vsi;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+ txq->ice_tx_ring = (struct ice_tx_desc *)tz->addr;
/*
* don't need to allocate software ring and reset for the fdir
* program queue just set the queue has been configured.
@@ -2838,7 +2838,7 @@ static inline int
ice_xmit_cleanup(struct ice_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile struct ice_tx_desc *txd = txq->tx_ring;
+ volatile struct ice_tx_desc *txd = txq->ice_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -2959,7 +2959,7 @@ uint16_t
ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct ice_tx_queue *txq;
- volatile struct ice_tx_desc *tx_ring;
+ volatile struct ice_tx_desc *ice_tx_ring;
volatile struct ice_tx_desc *txd;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
@@ -2981,7 +2981,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq = tx_queue;
sw_ring = txq->sw_ring;
- tx_ring = txq->tx_ring;
+ ice_tx_ring = txq->ice_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
@@ -3064,7 +3064,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Setup TX context descriptor if required */
volatile struct ice_tx_ctx_desc *ctx_txd =
(volatile struct ice_tx_ctx_desc *)
- &tx_ring[tx_id];
+ &ice_tx_ring[tx_id];
uint16_t cd_l2tag2 = 0;
uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
@@ -3082,7 +3082,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
cd_type_cmd_tso_mss |=
((uint64_t)ICE_TX_CTX_DESC_TSYN <<
ICE_TXD_CTX_QW1_CMD_S) |
- (((uint64_t)txq->vsi->adapter->ptp_tx_index <<
+ (((uint64_t)txq->ice_vsi->adapter->ptp_tx_index <<
ICE_TXD_CTX_QW1_TSYN_S) & ICE_TXD_CTX_QW1_TSYN_M);
ctx_txd->tunneling_params =
@@ -3106,7 +3106,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
m_seg = tx_pkt;
do {
- txd = &tx_ring[tx_id];
+ txd = &ice_tx_ring[tx_id];
txn = &sw_ring[txe->next_id];
if (txe->mbuf)
@@ -3134,7 +3134,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txe->last_id = tx_last;
tx_id = txe->next_id;
txe = txn;
- txd = &tx_ring[tx_id];
+ txd = &ice_tx_ring[tx_id];
txn = &sw_ring[txe->next_id];
}
@@ -3187,7 +3187,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
struct ci_tx_entry *txep;
uint16_t i;
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -3360,7 +3360,7 @@ static inline void
ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+ volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
@@ -3393,7 +3393,7 @@ tx_xmit_pkts(struct ice_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- volatile struct ice_tx_desc *txr = txq->tx_ring;
+ volatile struct ice_tx_desc *txr = txq->ice_tx_ring;
uint16_t n = 0;
/**
@@ -3722,7 +3722,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
bool pkt_error = false;
uint16_t good_pkts = nb_pkts;
const char *reason = NULL;
- struct ice_adapter *adapter = txq->vsi->adapter;
+ struct ice_adapter *adapter = txq->ice_vsi->adapter;
uint64_t ol_flags;
for (idx = 0; idx < nb_pkts; idx++) {
@@ -4701,11 +4701,11 @@ ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc)
uint16_t i;
fdirdp = (volatile struct ice_fltr_desc *)
- (&txq->tx_ring[txq->tx_tail]);
+ (&txq->ice_tx_ring[txq->tx_tail]);
fdirdp->qidx_compq_space_stat = fdir_desc->qidx_compq_space_stat;
fdirdp->dtype_cmd_vsi_fdid = fdir_desc->dtype_cmd_vsi_fdid;
- txdp = &txq->tx_ring[txq->tx_tail + 1];
+ txdp = &txq->ice_tx_ring[txq->tx_tail + 1];
txdp->buf_addr = rte_cpu_to_le_64(pf->fdir.dma_addr);
td_cmd = ICE_TX_DESC_CMD_EOP |
ICE_TX_DESC_CMD_RS |
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 8d1a1a8676..3257f449f5 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -148,7 +148,7 @@ struct ice_rx_queue {
struct ice_tx_queue {
uint16_t nb_tx_desc; /* number of TX descriptors */
rte_iova_t tx_ring_dma; /* TX ring DMA address */
- volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+ volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
uint16_t tx_tail; /* current value of tail register */
volatile uint8_t *qtx_tail; /* register address of tail */
@@ -171,7 +171,7 @@ struct ice_tx_queue {
uint32_t q_teid; /* TX schedule node id. */
uint16_t reg_idx;
uint64_t offloads;
- struct ice_vsi *vsi; /* the VSI this queue belongs to */
+ struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */
uint16_t tx_next_dd;
uint16_t tx_next_rs;
uint64_t mbuf_errors;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 336697e72d..dde07ac99e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -874,7 +874,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -895,7 +895,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -905,7 +905,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 6b6aa3f1fe..e4d0270176 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -869,7 +869,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -1071,7 +1071,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -1093,7 +1093,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->ice_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -1103,7 +1103,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 32e4541267..7b865b53ad 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -22,7 +22,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -121,7 +121,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
#ifdef __AVX512VL__
- struct rte_eth_dev *dev = &rte_eth_devices[txq->vsi->adapter->pf.dev_data->port_id];
+ struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id];
if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index debdd8f6a2..364207e8a8 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -717,7 +717,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -737,7 +737,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -747,7 +747,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index 2241726ad8..a878db3150 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -72,7 +72,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
return 0;
/* check DD bits on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & IXGBE_ADVTXD_STAT_DD))
return 0;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0a80b944f0..f7ddbba1b6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -106,7 +106,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
/* check DD bit on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD)))
return 0;
@@ -198,7 +198,7 @@ static inline void
ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
+ volatile union ixgbe_adv_tx_desc *txdp = &txq->ixgbe_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP-1;
@@ -232,7 +232,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
- volatile union ixgbe_adv_tx_desc *tx_r = txq->tx_ring;
+ volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring;
uint16_t n = 0;
/*
@@ -564,7 +564,7 @@ static inline int
ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring;
+ volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -652,7 +652,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.data[1] = 0;
txq = tx_queue;
sw_ring = txq->sw_ring;
- txr = txq->tx_ring;
+ txr = txq->ixgbe_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
txp = NULL;
@@ -2495,13 +2495,13 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
/* Zero out HW ring memory */
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i] = zeroed_desc;
+ txq->ixgbe_tx_ring[i] = zeroed_desc;
}
/* Initialize SW ring entries */
prev = (uint16_t) (txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i];
+ volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i];
txd->wb.status = rte_cpu_to_le_32(IXGBE_TXD_STAT_DD);
txe[i].mbuf = NULL;
@@ -2751,7 +2751,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
* handle the maximum ring size is allocated in order to allow for
* resizing in later calls to the queue setup function.
*/
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "ixgbe_tx_ring", queue_idx,
sizeof(union ixgbe_adv_tx_desc) * IXGBE_MAX_RING_DESC,
IXGBE_ALIGN, socket_id);
if (tz == NULL) {
@@ -2791,7 +2791,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
+ txq->ixgbe_tx_ring = (union ixgbe_adv_tx_desc *)tz->addr;
/* Allocate software ring */
txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
@@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
- txq->sw_ring, txq->tx_ring, txq->tx_ring_dma);
+ txq->sw_ring, txq->ixgbe_tx_ring, txq->tx_ring_dma);
/* set up vector or scalar TX function as appropriate */
ixgbe_set_tx_function(dev, txq);
@@ -3328,7 +3328,7 @@ ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].wb.status;
+ status = &txq->ixgbe_tx_ring[desc].wb.status;
if (*status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD))
return RTE_ETH_TX_DESC_DONE;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 00e2009b3e..f6bae37cf3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -185,7 +185,7 @@ struct ixgbe_advctx_info {
*/
struct ixgbe_tx_queue {
/** TX ring virtual address. */
- volatile union ixgbe_adv_tx_desc *tx_ring;
+ volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
union {
struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index e9592c0d08..cc51bf6eed 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -22,7 +22,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
/* check DD bit on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & IXGBE_ADVTXD_STAT_DD))
return 0;
@@ -154,11 +154,11 @@ _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
/* Zero out HW ring memory */
for (i = 0; i < txq->nb_tx_desc; i++)
- txq->tx_ring[i] = zeroed_desc;
+ txq->ixgbe_tx_ring[i] = zeroed_desc;
/* Initialize SW ring entries */
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i];
+ volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i];
txd->wb.status = IXGBE_TXD_STAT_DD;
txe[i].mbuf = NULL;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 871c1a7cd2..06be7ec82a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -590,7 +590,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -610,7 +610,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
}
@@ -620,7 +620,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |=
+ txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |=
rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
txq->tx_next_rs = (uint16_t)(txq->tx_next_rs +
txq->tx_rs_thresh);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 37f2079519..a21a57bd55 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -712,7 +712,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -733,7 +733,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &(txq->tx_ring[tx_id]);
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
}
@@ -743,7 +743,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |=
+ txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |=
rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
txq->tx_next_rs = (uint16_t)(txq->tx_next_rs +
txq->tx_rs_thresh);
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v1 06/21] net/_common_intel: merge ice and i40e Tx queue struct
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (4 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 05/21] drivers/net: add prefix for driver-specific structs Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 07/21] net/iavf: use common Tx queue structure Bruce Richardson
` (14 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Anatoly Burakov
The queue structures of the i40e and ice drivers are virtually
identical, so merge them into a common struct. This should make it
easier to merge functions across the drivers in future, using the
common struct as the shared type.
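In rough outline, the merged structure uses anonymous unions for the
driver-specific members (an abridged sketch of the full definition in
the diff below; the field names match the patch):

struct ci_tx_queue {
	union { /* TX ring virtual address */
		volatile struct ice_tx_desc *ice_tx_ring;
		volatile struct i40e_tx_desc *i40e_tx_ring;
	};
	/* ... fields common to both drivers: sw_ring, tx_tail, etc. ... */
	union { /* the VSI this queue belongs to */
		struct ice_vsi *ice_vsi;
		struct i40e_vsi *i40e_vsi;
	};
};

/* each driver keeps type-safe access through its own union member,
 * e.g. in the i40e cleanup path:
 *
 *	volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring;
 */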
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 55 +++++++++++++++++
drivers/net/i40e/i40e_ethdev.c | 4 +-
drivers/net/i40e/i40e_ethdev.h | 4 +-
drivers/net/i40e/i40e_fdir.c | 4 +-
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 58 +++++++++---------
drivers/net/i40e/i40e_rxtx.h | 50 ++--------------
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +-
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +-
drivers/net/ice/ice_dcf.c | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 10 ++--
drivers/net/ice/ice_diagnose.c | 2 +-
drivers/net/ice/ice_ethdev.c | 2 +-
drivers/net/ice/ice_ethdev.h | 4 +-
drivers/net/ice/ice_rxtx.c | 60 +++++++++----------
drivers/net/ice/ice_rxtx.h | 41 +------------
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +--
drivers/net/ice/ice_rxtx_vec_common.h | 8 +--
drivers/net/ice/ice_rxtx_vec_sse.c | 6 +-
24 files changed, 165 insertions(+), 185 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 5397007411..c965f5ee6c 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -8,6 +8,9 @@
#include <stdint.h>
#include <rte_mbuf.h>
+/* forward declaration of the common intel (ci) queue structure */
+struct ci_tx_queue;
+
/**
* Structure associated with each descriptor of the TX ring of a TX queue.
*/
@@ -24,6 +27,58 @@ struct ci_tx_entry_vec {
struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
};
+typedef void (*ice_tx_release_mbufs_t)(struct ci_tx_queue *txq);
+
+struct ci_tx_queue {
+ union { /* TX ring virtual address */
+ volatile struct ice_tx_desc *ice_tx_ring;
+ volatile struct i40e_tx_desc *i40e_tx_ring;
+ };
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ rte_iova_t tx_ring_dma; /* TX ring DMA address */
+ uint16_t nb_tx_desc; /* number of TX descriptors */
+ uint16_t tx_tail; /* current value of tail register */
+ uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+ /* index to last TX descriptor to have been cleaned */
+ uint16_t last_desc_cleaned;
+ /* Total number of TX descriptors ready to be allocated. */
+ uint16_t nb_tx_free;
+ /* Start freeing TX buffers if there are less free descriptors than
+ * this value.
+ */
+ uint16_t tx_free_thresh;
+ /* Number of TX descriptors to use before RS bit is set. */
+ uint16_t tx_rs_thresh;
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
+ uint16_t port_id; /* Device port identifier. */
+ uint16_t queue_id; /* TX queue index. */
+ uint16_t reg_idx;
+ uint64_t offloads;
+ uint16_t tx_next_dd;
+ uint16_t tx_next_rs;
+ uint64_t mbuf_errors;
+ bool tx_deferred_start; /* don't start this queue in dev start */
+ bool q_set; /* indicate if tx queue has been configured */
+ union { /* the VSI this queue belongs to */
+ struct ice_vsi *ice_vsi;
+ struct i40e_vsi *i40e_vsi;
+ };
+ const struct rte_memzone *mz;
+
+ union {
+ struct { /* ICE driver specific values */
+ ice_tx_release_mbufs_t tx_rel_mbufs;
+ uint32_t q_teid; /* TX schedule node id. */
+ };
+ struct { /* I40E driver specific values */
+ uint8_t dcb_tc;
+ };
+ };
+};
+
static __rte_always_inline void
ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 30dcdc68a8..bf5560ccc8 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3685,7 +3685,7 @@ i40e_dev_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct i40e_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
@@ -6585,7 +6585,7 @@ i40e_dev_tx_init(struct i40e_pf *pf)
struct rte_eth_dev_data *data = pf->dev_data;
uint16_t i;
uint32_t ret = I40E_SUCCESS;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (i = 0; i < data->nb_tx_queues; i++) {
txq = data->tx_queues[i];
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 98213948b4..d351193ed9 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -334,7 +334,7 @@ struct i40e_vsi_list {
};
struct i40e_rx_queue;
-struct i40e_tx_queue;
+struct ci_tx_queue;
/* Bandwidth limit information */
struct i40e_bw_info {
@@ -738,7 +738,7 @@ TAILQ_HEAD(i40e_fdir_filter_list, i40e_fdir_filter);
struct i40e_fdir_info {
struct i40e_vsi *fdir_vsi; /* pointer to fdir VSI structure */
uint16_t match_counter_index; /* Statistic counter index used for fdir*/
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct i40e_rx_queue *rxq;
void *prg_pkt[I40E_FDIR_PRG_PKT_CNT]; /* memory for fdir program packet */
uint64_t dma_addr[I40E_FDIR_PRG_PKT_CNT]; /* physic address of packet memory*/
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index c600167634..349627a2ed 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -1372,7 +1372,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_fdir_info *fdir_info = &pf->fdir;
- struct i40e_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
/* no available buffer
* search for more available buffers from the current
@@ -1628,7 +1628,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
const struct i40e_fdir_filter_conf *filter,
bool add, bool wait_status)
{
- struct i40e_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
struct i40e_rx_queue *rxq = pf->fdir.rxq;
const struct i40e_fdir_action *fdir_action = &filter->action;
volatile struct i40e_tx_desc *txdp;
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 8679e5c1fd..5a65c80d90 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -55,7 +55,7 @@ uint16_t
i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 34ef931859..305bc53480 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -376,7 +376,7 @@ i40e_build_ctob(uint32_t td_cmd,
}
static inline int
-i40e_xmit_cleanup(struct i40e_tx_queue *txq)
+i40e_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring;
@@ -1080,7 +1080,7 @@ i40e_calc_pkt_desc(struct rte_mbuf *tx_pkt)
uint16_t
i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
volatile struct i40e_tx_desc *txd;
@@ -1329,7 +1329,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
static __rte_always_inline int
-i40e_tx_free_bufs(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint16_t tx_rs_thresh = txq->tx_rs_thresh;
@@ -1413,7 +1413,7 @@ tx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts)
/* Fill hardware descriptor ring with mbuf data */
static inline void
-i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
+i40e_tx_fill_hw_ring(struct ci_tx_queue *txq,
struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
@@ -1441,7 +1441,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
}
static inline uint16_t
-tx_xmit_pkts(struct i40e_tx_queue *txq,
+tx_xmit_pkts(struct ci_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
@@ -1504,14 +1504,14 @@ i40e_xmit_pkts_simple(void *tx_queue,
uint16_t nb_tx = 0;
if (likely(nb_pkts <= I40E_TX_MAX_BURST))
- return tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ return tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
tx_pkts, nb_pkts);
while (nb_pkts) {
uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
I40E_TX_MAX_BURST);
- ret = tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ ret = tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
&tx_pkts[nb_tx], num);
nb_tx = (uint16_t)(nb_tx + ret);
nb_pkts = (uint16_t)(nb_pkts - ret);
@@ -1527,7 +1527,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1549,7 +1549,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
static uint16_t
i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
uint16_t idx;
uint64_t ol_flags;
struct rte_mbuf *mb;
@@ -1611,7 +1611,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
pkt_error = true;
break;
}
- if (mb->nb_segs > ((struct i40e_tx_queue *)tx_queue)->nb_tx_desc) {
+ if (mb->nb_segs > ((struct ci_tx_queue *)tx_queue)->nb_tx_desc) {
PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
pkt_error = true;
break;
@@ -1873,7 +1873,7 @@ int
i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
int err;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1907,7 +1907,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int
i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -2311,7 +2311,7 @@ i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
@@ -2341,7 +2341,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
static int
i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
- struct i40e_tx_queue *txq)
+ struct ci_tx_queue *txq)
{
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -2394,7 +2394,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
{
struct i40e_vsi *vsi;
struct i40e_pf *pf = NULL;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -2515,7 +2515,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("i40e tx queue",
- sizeof(struct i40e_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -2600,7 +2600,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
void
i40e_tx_queue_release(void *txq)
{
- struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
@@ -2705,7 +2705,7 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
}
void
-i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
+i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
struct rte_eth_dev *dev;
uint16_t i;
@@ -2765,7 +2765,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
}
static int
-i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
+i40e_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -2824,7 +2824,7 @@ i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
}
static int
-i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
+i40e_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -2848,7 +2848,7 @@ i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
}
static int
-i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
+i40e_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -2856,7 +2856,7 @@ i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
int
i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -2872,7 +2872,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
}
void
-i40e_reset_tx_queue(struct i40e_tx_queue *txq)
+i40e_reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint16_t i, prev, size;
@@ -2911,7 +2911,7 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq)
/* Init the TX queue in hardware */
int
-i40e_tx_queue_init(struct i40e_tx_queue *txq)
+i40e_tx_queue_init(struct ci_tx_queue *txq)
{
enum i40e_status_code err = I40E_SUCCESS;
struct i40e_vsi *vsi = txq->i40e_vsi;
@@ -3167,7 +3167,7 @@ i40e_dev_free_queues(struct rte_eth_dev *dev)
enum i40e_status_code
i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz = NULL;
struct rte_eth_dev *dev;
uint32_t ring_size;
@@ -3181,7 +3181,7 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("i40e fdir tx queue",
- sizeof(struct i40e_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
SOCKET_ID_ANY);
if (!txq) {
@@ -3304,7 +3304,7 @@ void
i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -3552,7 +3552,7 @@ i40e_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
}
void __rte_cold
-i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
+i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3592,7 +3592,7 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
#endif
if (ad->tx_vec_allowed) {
for (i = 0; i < dev->data->nb_tx_queues; i++) {
- struct i40e_tx_queue *txq =
+ struct ci_tx_queue *txq =
dev->data->tx_queues[i];
if (txq && i40e_txq_vec_setup(txq)) {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 8315ee2f59..043d1df912 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -124,44 +124,6 @@ struct i40e_rx_queue {
const struct rte_memzone *mz;
};
-/*
- * Structure associated with each TX queue.
- */
-struct i40e_tx_queue {
- uint16_t nb_tx_desc; /**< number of TX descriptors */
- rte_iova_t tx_ring_dma; /**< TX ring DMA address */
- volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */
- struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
- uint16_t tx_tail; /**< current value of tail register */
- volatile uint8_t *qtx_tail; /**< register address of tail */
- uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
- /**< index to last TX descriptor to have been cleaned */
- uint16_t last_desc_cleaned;
- /**< Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- /**< Start freeing TX buffers if there are less free descriptors than
- this value. */
- uint16_t tx_free_thresh;
- /** Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint16_t port_id; /**< Device port identifier. */
- uint16_t queue_id; /**< TX queue index. */
- uint16_t reg_idx;
- struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */
- uint16_t tx_next_dd;
- uint16_t tx_next_rs;
- bool q_set; /**< indicate if tx queue has been configured */
- uint64_t mbuf_errors;
-
- bool tx_deferred_start; /**< don't start this queue in dev start */
- uint8_t dcb_tc; /**< Traffic class of tx queue */
- uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
- const struct rte_memzone *mz;
-};
-
/** Offload features */
union i40e_tx_offload {
uint64_t data;
@@ -209,15 +171,15 @@ uint16_t i40e_simple_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
uint16_t i40e_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int i40e_tx_queue_init(struct i40e_tx_queue *txq);
+int i40e_tx_queue_init(struct ci_tx_queue *txq);
int i40e_rx_queue_init(struct i40e_rx_queue *rxq);
-void i40e_free_tx_resources(struct i40e_tx_queue *txq);
+void i40e_free_tx_resources(struct ci_tx_queue *txq);
void i40e_free_rx_resources(struct i40e_rx_queue *rxq);
void i40e_dev_clear_queues(struct rte_eth_dev *dev);
void i40e_dev_free_queues(struct rte_eth_dev *dev);
void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
-void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+void i40e_reset_tx_queue(struct ci_tx_queue *txq);
+void i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq);
int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
@@ -237,13 +199,13 @@ uint16_t i40e_recv_scattered_pkts_vec(void *rx_queue,
uint16_t nb_pkts);
int i40e_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
int i40e_rxq_vec_setup(struct i40e_rx_queue *rxq);
-int i40e_txq_vec_setup(struct i40e_tx_queue *txq);
+int i40e_txq_vec_setup(struct ci_tx_queue *txq);
void i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq);
uint16_t i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
void i40e_set_rx_function(struct rte_eth_dev *dev);
void i40e_set_tx_function_flag(struct rte_eth_dev *dev,
- struct i40e_tx_queue *txq);
+ struct ci_tx_queue *txq);
void i40e_set_tx_function(struct rte_eth_dev *dev);
void i40e_set_default_ptype_table(struct rte_eth_dev *dev);
void i40e_set_default_pctype_table(struct rte_eth_dev *dev);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index bf0e9ebd71..500bba2cef 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -551,7 +551,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -625,7 +625,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused * txq)
+i40e_txq_vec_setup(struct ci_tx_queue __rte_unused * txq)
{
return 0;
}
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 5042e348db..29bef64287 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -743,7 +743,7 @@ static inline uint16_t
i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -808,7 +808,7 @@ i40e_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 04fbe3b2e3..a3f6d1667f 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -755,7 +755,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
}
static __rte_always_inline int
-i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -933,7 +933,7 @@ static inline uint16_t
i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -999,7 +999,7 @@ i40e_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index e81f958361..57d6263ccf 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -17,7 +17,7 @@
#endif
static __rte_always_inline int
-i40e_tx_free_bufs(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 05191e4884..4006538ba5 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -679,7 +679,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
struct rte_mbuf **__rte_restrict tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -753,7 +753,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq)
+i40e_txq_vec_setup(struct ci_tx_queue __rte_unused *txq)
{
return 0;
}
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index d81b553842..e9a5715515 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -698,7 +698,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -771,7 +771,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq)
+i40e_txq_vec_setup(struct ci_tx_queue __rte_unused *txq)
{
return 0;
}
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 204d4eadbb..65c18921f4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1177,8 +1177,8 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
{
struct ice_rx_queue **rxq =
(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
- struct ice_tx_queue **txq =
- (struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
+ struct ci_tx_queue **txq =
+ (struct ci_tx_queue **)hw->eth_dev->data->tx_queues;
struct virtchnl_vsi_queue_config_info *vc_config;
struct virtchnl_queue_pair_info *vc_qp;
struct dcf_virtchnl_cmd args;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4ffd1f5567..a0c065d78c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -387,7 +387,7 @@ reset_rx_queue(struct ice_rx_queue *rxq)
}
static inline void
-reset_tx_queue(struct ice_tx_queue *txq)
+reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint32_t i, size;
@@ -454,7 +454,7 @@ ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct iavf_hw *hw = &ad->real_hw.avf;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err = 0;
if (tx_queue_id >= dev->data->nb_tx_queues)
@@ -486,7 +486,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct ice_dcf_hw *hw = &ad->real_hw;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
if (tx_queue_id >= dev->data->nb_tx_queues)
@@ -511,7 +511,7 @@ static int
ice_dcf_start_queues(struct rte_eth_dev *dev)
{
struct ice_rx_queue *rxq;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int nb_rxq = 0;
int nb_txq, i;
@@ -638,7 +638,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct ice_dcf_hw *hw = &ad->real_hw;
struct ice_rx_queue *rxq;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret, i;
/* Stop All queues */
diff --git a/drivers/net/ice/ice_diagnose.c b/drivers/net/ice/ice_diagnose.c
index 5bec9d00ad..a50068441a 100644
--- a/drivers/net/ice/ice_diagnose.c
+++ b/drivers/net/ice/ice_diagnose.c
@@ -605,7 +605,7 @@ void print_node(const struct rte_eth_dev_data *ethdata,
get_elem_type(data->data.elem_type));
if (data->data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
for (uint16_t i = 0; i < ethdata->nb_tx_queues; i++) {
- struct ice_tx_queue *q = ethdata->tx_queues[i];
+ struct ci_tx_queue *q = ethdata->tx_queues[i];
if (q->q_teid == data->node_teid) {
fprintf(stream, "\t\t\t\t<tr><td>TXQ</td><td>%u</td></tr>\n", i);
break;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 93a6308a86..80eee03204 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -6448,7 +6448,7 @@ ice_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct ice_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index a5b27fabd2..ba54655499 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -258,7 +258,7 @@ struct ice_vsi_list {
};
struct ice_rx_queue;
-struct ice_tx_queue;
+struct ci_tx_queue;
/**
* Structure that defines a VSI, associated with a adapter.
@@ -408,7 +408,7 @@ struct ice_fdir_counter_pool_container {
*/
struct ice_fdir_info {
struct ice_vsi *fdir_vsi; /* pointer to fdir VSI structure */
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_rx_queue *rxq;
void *prg_pkt; /* memory for fdir program packet */
uint64_t dma_addr; /* physic address of packet memory*/
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5ec92f6d0c..bcc7c7a016 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -743,7 +743,7 @@ ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct ice_vsi *vsi;
struct ice_hw *hw;
@@ -944,7 +944,7 @@ int
ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct ice_vsi *vsi;
struct ice_hw *hw;
@@ -1008,7 +1008,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* Free all mbufs for descriptors in tx queue */
static void
-_ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+_ice_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -1026,7 +1026,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
}
static void
-ice_reset_tx_queue(struct ice_tx_queue *txq)
+ice_reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint16_t i, prev, size;
@@ -1066,7 +1066,7 @@ ice_reset_tx_queue(struct ice_tx_queue *txq)
int
ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
@@ -1134,7 +1134,7 @@ ice_fdir_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
@@ -1354,7 +1354,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -1467,7 +1467,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket(NULL,
- sizeof(struct ice_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -1542,7 +1542,7 @@ ice_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
void
ice_tx_queue_release(void *txq)
{
- struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
@@ -1577,7 +1577,7 @@ void
ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -2354,7 +2354,7 @@ ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct ice_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
@@ -2412,7 +2412,7 @@ ice_free_queues(struct rte_eth_dev *dev)
int
ice_fdir_setup_tx_resources(struct ice_pf *pf)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz = NULL;
uint32_t ring_size;
struct rte_eth_dev *dev;
@@ -2426,7 +2426,7 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("ice fdir tx queue",
- sizeof(struct ice_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
SOCKET_ID_ANY);
if (!txq) {
@@ -2835,7 +2835,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
}
static inline int
-ice_xmit_cleanup(struct ice_tx_queue *txq)
+ice_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct ice_tx_desc *txd = txq->ice_tx_ring;
@@ -2958,7 +2958,7 @@ ice_calc_pkt_desc(struct rte_mbuf *tx_pkt)
uint16_t
ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
volatile struct ice_tx_desc *ice_tx_ring;
volatile struct ice_tx_desc *txd;
struct ci_tx_entry *sw_ring;
@@ -3182,7 +3182,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
static __rte_always_inline int
-ice_tx_free_bufs(struct ice_tx_queue *txq)
+ice_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint16_t i;
@@ -3218,7 +3218,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
}
static int
-ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
+ice_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -3278,7 +3278,7 @@ ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
#ifdef RTE_ARCH_X86
static int
-ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
+ice_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -3286,7 +3286,7 @@ ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
#endif
static int
-ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
+ice_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -3312,7 +3312,7 @@ ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
int
ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3357,7 +3357,7 @@ tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
}
static inline void
-ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+ice_tx_fill_hw_ring(struct ci_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail];
@@ -3389,7 +3389,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
}
static inline uint16_t
-tx_xmit_pkts(struct ice_tx_queue *txq,
+tx_xmit_pkts(struct ci_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
@@ -3452,14 +3452,14 @@ ice_xmit_pkts_simple(void *tx_queue,
uint16_t nb_tx = 0;
if (likely(nb_pkts <= ICE_TX_MAX_BURST))
- return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ return tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
tx_pkts, nb_pkts);
while (nb_pkts) {
uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
ICE_TX_MAX_BURST);
- ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ ret = tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
&tx_pkts[nb_tx], num);
nb_tx = (uint16_t)(nb_tx + ret);
nb_pkts = (uint16_t)(nb_pkts - ret);
@@ -3667,7 +3667,7 @@ ice_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
}
void __rte_cold
-ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
+ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3716,7 +3716,7 @@ ice_check_empty_mbuf(struct rte_mbuf *tx_pkt)
static uint16_t
ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct ice_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
uint16_t idx;
struct rte_mbuf *mb;
bool pkt_error = false;
@@ -3778,7 +3778,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
pkt_error = true;
break;
}
- if (mb->nb_segs > ((struct ice_tx_queue *)tx_queue)->nb_tx_desc) {
+ if (mb->nb_segs > ((struct ci_tx_queue *)tx_queue)->nb_tx_desc) {
PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
pkt_error = true;
break;
@@ -3839,7 +3839,7 @@ ice_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
(m->tso_segsz < ICE_MIN_TSO_MSS ||
m->tso_segsz > ICE_MAX_TSO_MSS ||
m->nb_segs >
- ((struct ice_tx_queue *)tx_queue)->nb_tx_desc ||
+ ((struct ci_tx_queue *)tx_queue)->nb_tx_desc ||
m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
/**
* MSS outside the range are considered malicious
@@ -3881,7 +3881,7 @@ ice_set_tx_function(struct rte_eth_dev *dev)
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int mbuf_check = ad->devargs.mbuf_check;
#ifdef RTE_ARCH_X86
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
int tx_check_ret = -1;
@@ -4693,7 +4693,7 @@ ice_check_fdir_programming_status(struct ice_rx_queue *rxq)
int
ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc)
{
- struct ice_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
struct ice_rx_queue *rxq = pf->fdir.rxq;
volatile struct ice_fltr_desc *fdirdp;
volatile struct ice_tx_desc *txdp;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 3257f449f5..1cae8a9b50 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -79,7 +79,6 @@ extern int ice_timestamp_dynfield_offset;
#define ICE_TX_MTU_SEG_MAX 8
typedef void (*ice_rx_release_mbufs_t)(struct ice_rx_queue *rxq);
-typedef void (*ice_tx_release_mbufs_t)(struct ice_tx_queue *txq);
typedef void (*ice_rxd_to_pkt_fields_t)(struct ice_rx_queue *rxq,
struct rte_mbuf *mb,
volatile union ice_rx_flex_desc *rxdp);
@@ -145,42 +144,6 @@ struct ice_rx_queue {
bool ts_enable; /* if rxq timestamp is enabled */
};
-struct ice_tx_queue {
- uint16_t nb_tx_desc; /* number of TX descriptors */
- rte_iova_t tx_ring_dma; /* TX ring DMA address */
- volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */
- struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
- uint16_t tx_tail; /* current value of tail register */
- volatile uint8_t *qtx_tail; /* register address of tail */
- uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
- /* index to last TX descriptor to have been cleaned */
- uint16_t last_desc_cleaned;
- /* Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- /* Start freeing TX buffers if there are less free descriptors than
- * this value.
- */
- uint16_t tx_free_thresh;
- /* Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint16_t port_id; /* Device port identifier. */
- uint16_t queue_id; /* TX queue index. */
- uint32_t q_teid; /* TX schedule node id. */
- uint16_t reg_idx;
- uint64_t offloads;
- struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */
- uint16_t tx_next_dd;
- uint16_t tx_next_rs;
- uint64_t mbuf_errors;
- bool tx_deferred_start; /* don't start this queue in dev start */
- bool q_set; /* indicate if tx queue has been configured */
- ice_tx_release_mbufs_t tx_rel_mbufs;
- const struct rte_memzone *mz;
-};
-
/* Offload features */
union ice_tx_offload {
uint64_t data;
@@ -268,7 +231,7 @@ void ice_set_rx_function(struct rte_eth_dev *dev);
uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
void ice_set_tx_function_flag(struct rte_eth_dev *dev,
- struct ice_tx_queue *txq);
+ struct ci_tx_queue *txq);
void ice_set_tx_function(struct rte_eth_dev *dev);
uint32_t ice_rx_queue_count(void *rx_queue);
void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
@@ -290,7 +253,7 @@ void ice_select_rxd_to_pkt_fields_handler(struct ice_rx_queue *rxq,
int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
int ice_rxq_vec_setup(struct ice_rx_queue *rxq);
-int ice_txq_vec_setup(struct ice_tx_queue *txq);
+int ice_txq_vec_setup(struct ci_tx_queue *txq);
uint16_t ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index dde07ac99e..12ffa0fa9a 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -856,7 +856,7 @@ static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -924,7 +924,7 @@ ice_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index e4d0270176..eabd8b04a0 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -860,7 +860,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
}
static __rte_always_inline int
-ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
+ice_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -1053,7 +1053,7 @@ static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool do_offload)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -1122,7 +1122,7 @@ ice_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1144,7 +1144,7 @@ ice_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 7b865b53ad..b39289ceb5 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -13,7 +13,7 @@
#endif
static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
+ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
@@ -105,7 +105,7 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
}
static inline void
-_ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
+_ice_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -231,7 +231,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
}
static inline int
-ice_tx_vec_queue_default(struct ice_tx_queue *txq)
+ice_tx_vec_queue_default(struct ci_tx_queue *txq)
{
if (!txq)
return -1;
@@ -273,7 +273,7 @@ static inline int
ice_tx_vec_dev_check_default(struct rte_eth_dev *dev)
{
int i;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret = 0;
int result = 0;
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 364207e8a8..a62a32a552 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -697,7 +697,7 @@ static uint16_t
ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -766,7 +766,7 @@ ice_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -793,7 +793,7 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq)
}
int __rte_cold
-ice_txq_vec_setup(struct ice_tx_queue __rte_unused *txq)
+ice_txq_vec_setup(struct ci_tx_queue __rte_unused *txq)
{
if (!txq)
return -1;
--
2.43.0
* [PATCH v1 07/21] net/iavf: use common Tx queue structure
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (5 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 06/21] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 08/21] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
` (13 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Vladimir Medvedkin, Ian Stokes, Konstantin Ananyev
Merge in the few additional fields used by the iavf driver and convert that
driver to use the common Tx queue structure as well.
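This follows the same pattern as the earlier ice/i40e merge: the ring and
VSI pointers go into per-driver unions, and the handful of iavf-only fields
go into a driver-specific block of the trailing union. As a rough,
self-contained sketch of that pattern (the descriptor type and the field
subset here are illustrative stand-ins, not the real DPDK definitions):

    #include <stdbool.h>
    #include <stdint.h>

    struct i40e_desc_stub { uint64_t qw0, qw1; }; /* stand-in for i40e_tx_desc */
    struct iavf_desc_stub { uint64_t qw0, qw1; }; /* stand-in for iavf_tx_desc */

    struct tx_queue_sketch {
        union { /* one ring-pointer member per driver, same storage */
            volatile struct i40e_desc_stub *i40e_tx_ring;
            volatile struct iavf_desc_stub *iavf_tx_ring;
        };
        uint16_t nb_tx_desc;     /* common fields shared by all drivers */
        union { /* trailing driver-specific fields */
            struct { /* iavf-only values */
                uint16_t ipsec_crypto_pkt_md_offset;
                uint8_t vlan_flag;
                bool use_ctx;    /* ctx desc in use: two descriptors per pkt */
            };
        };
    };

Since all members of each union alias the same storage, a given queue is
only ever accessed through the members of the driver that owns it.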
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 15 +++++++-
drivers/net/iavf/iavf.h | 2 +-
drivers/net/iavf/iavf_ethdev.c | 4 +-
drivers/net/iavf/iavf_rxtx.c | 42 ++++++++++-----------
drivers/net/iavf/iavf_rxtx.h | 49 +++----------------------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 8 ++--
drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 ++--
drivers/net/iavf/iavf_vchnl.c | 6 +--
10 files changed, 62 insertions(+), 90 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c965f5ee6c..c4a1a0c816 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -31,8 +31,9 @@ typedef void (*ice_tx_release_mbufs_t)(struct ci_tx_queue *txq);
struct ci_tx_queue {
union { /* TX ring virtual address */
- volatile struct ice_tx_desc *ice_tx_ring;
volatile struct i40e_tx_desc *i40e_tx_ring;
+ volatile struct iavf_tx_desc *iavf_tx_ring;
+ volatile struct ice_tx_desc *ice_tx_ring;
};
volatile uint8_t *qtx_tail; /* register address of tail */
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
@@ -63,8 +64,9 @@ struct ci_tx_queue {
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
union { /* the VSI this queue belongs to */
- struct ice_vsi *ice_vsi;
struct i40e_vsi *i40e_vsi;
+ struct iavf_vsi *iavf_vsi;
+ struct ice_vsi *ice_vsi;
};
const struct rte_memzone *mz;
@@ -76,6 +78,15 @@ struct ci_tx_queue {
struct { /* I40E driver specific values */
uint8_t dcb_tc;
};
+ struct { /* iavf driver specific values */
+ uint16_t ipsec_crypto_pkt_md_offset;
+ uint8_t rel_mbufs_type;
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
+ uint8_t vlan_flag;
+ uint8_t tc;
+ bool use_ctx; /* with ctx info, each pkt needs two descriptors */
+ };
};
};
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index ad526c644c..956c60ef45 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -98,7 +98,7 @@
struct iavf_adapter;
struct iavf_rx_queue;
-struct iavf_tx_queue;
+struct ci_tx_queue;
struct iavf_ipsec_crypto_stats {
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 7f80cd6258..328c224c93 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -954,7 +954,7 @@ static int
iavf_start_queues(struct rte_eth_dev *dev)
{
struct iavf_rx_queue *rxq;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
uint16_t nb_txq, nb_rxq;
@@ -1885,7 +1885,7 @@ iavf_dev_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct iavf_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6eda91e76b..7e381b2a17 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -213,7 +213,7 @@ check_rx_vec_allow(struct iavf_rx_queue *rxq)
}
static inline bool
-check_tx_vec_allow(struct iavf_tx_queue *txq)
+check_tx_vec_allow(struct ci_tx_queue *txq)
{
if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) &&
txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
@@ -282,7 +282,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq)
}
static inline void
-reset_tx_queue(struct iavf_tx_queue *txq)
+reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint32_t i, size;
@@ -388,7 +388,7 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq)
}
static inline void
-release_txq_mbufs(struct iavf_tx_queue *txq)
+release_txq_mbufs(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -778,7 +778,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
struct iavf_info *vf =
IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_vsi *vsi = &vf->vsi;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *mz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -814,7 +814,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("iavf txq",
- sizeof(struct iavf_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -979,7 +979,7 @@ iavf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err = 0;
PMD_DRV_FUNC_TRACE();
@@ -1048,7 +1048,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
PMD_DRV_FUNC_TRACE();
@@ -1092,7 +1092,7 @@ iavf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
void
iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
{
- struct iavf_tx_queue *q = dev->data->tx_queues[qid];
+ struct ci_tx_queue *q = dev->data->tx_queues[qid];
if (!q)
return;
@@ -1107,7 +1107,7 @@ static void
iavf_reset_queues(struct rte_eth_dev *dev)
{
struct iavf_rx_queue *rxq;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
for (i = 0; i < dev->data->nb_tx_queues; i++) {
@@ -2377,7 +2377,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue,
}
static inline int
-iavf_xmit_cleanup(struct iavf_tx_queue *txq)
+iavf_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
@@ -2781,7 +2781,7 @@ iavf_fill_data_desc(volatile struct iavf_tx_desc *desc,
static struct iavf_ipsec_crypto_pkt_metadata *
-iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
+iavf_ipsec_crypto_get_pkt_metadata(const struct ci_tx_queue *txq,
struct rte_mbuf *m)
{
if (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)
@@ -2795,7 +2795,7 @@ iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile struct iavf_tx_desc *txr = txq->iavf_tx_ring;
struct ci_tx_entry *txe_ring = txq->sw_ring;
struct ci_tx_entry *txe, *txn;
@@ -3027,7 +3027,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* correct queue.
*/
static int
-iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
+iavf_check_vlan_up2tc(struct ci_tx_queue *txq, struct rte_mbuf *m)
{
struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id];
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -3646,7 +3646,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
int i, ret;
uint64_t ol_flags;
struct rte_mbuf *m;
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id];
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_adapter *adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3800,7 +3800,7 @@ static uint16_t
iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
enum iavf_tx_burst_type tx_burst_type;
if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll)
@@ -3823,7 +3823,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t good_pkts = nb_pkts;
const char *reason = NULL;
bool pkt_error = false;
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct iavf_adapter *adapter = txq->iavf_vsi->adapter;
enum iavf_tx_burst_type tx_burst_type =
txq->iavf_vsi->adapter->tx_burst_type;
@@ -4144,7 +4144,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
int mbuf_check = adapter->devargs.mbuf_check;
int no_poll_on_link_down = adapter->devargs.no_poll_on_link_down;
#ifdef RTE_ARCH_X86
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
int check_ret;
bool use_sse = false;
@@ -4265,7 +4265,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
}
static int
-iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
+iavf_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -4324,7 +4324,7 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
int
iavf_dev_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct iavf_tx_queue *q = (struct iavf_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
return iavf_tx_done_cleanup_full(q, free_cnt);
}
@@ -4350,7 +4350,7 @@ void
iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -4422,7 +4422,7 @@ iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
int
iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index cc1eaaf54c..c18e01560c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -211,7 +211,7 @@ struct iavf_rxq_ops {
};
struct iavf_txq_ops {
- void (*release_mbufs)(struct iavf_tx_queue *txq);
+ void (*release_mbufs)(struct ci_tx_queue *txq);
};
@@ -273,43 +273,6 @@ struct iavf_rx_queue {
uint64_t hw_time_update;
};
-/* Structure associated with each TX queue. */
-struct iavf_tx_queue {
- const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */
- rte_iova_t tx_ring_dma; /* Tx ring DMA address */
- struct ci_tx_entry *sw_ring; /* address array of SW ring */
- uint16_t nb_tx_desc; /* ring length */
- uint16_t tx_tail; /* current value of tail */
- volatile uint8_t *qtx_tail; /* register address of tail */
- /* number of used desc since RS bit set */
- uint16_t nb_tx_used;
- uint16_t nb_tx_free;
- uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t tx_free_thresh;
- uint16_t tx_rs_thresh;
- uint8_t rel_mbufs_type;
- struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */
-
- uint16_t port_id;
- uint16_t queue_id;
- uint64_t offloads;
- uint16_t tx_next_dd; /* next to set RS, for VPMD */
- uint16_t tx_next_rs; /* next to check DD, for VPMD */
- uint16_t ipsec_crypto_pkt_md_offset;
-
- uint64_t mbuf_errors;
-
- bool q_set; /* if rx queue has been configured */
- bool tx_deferred_start; /* don't start this queue in dev start */
- const struct iavf_txq_ops *ops;
-#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
-#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
- uint8_t vlan_flag;
- uint8_t tc;
- uint8_t use_ctx:1; /* if use the ctx desc, a packet needs two descriptors */
-};
-
/* Offload features */
union iavf_tx_offload {
uint64_t data;
@@ -724,7 +687,7 @@ int iavf_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
int iavf_rx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
-int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+int iavf_txq_vec_setup(struct ci_tx_queue *txq);
uint16_t iavf_recv_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t iavf_recv_pkts_vec_avx512_offload(void *rx_queue,
@@ -757,14 +720,14 @@ uint16_t iavf_xmit_pkts_vec_avx512_ctx_offload(void *tx_queue, struct rte_mbuf *
uint16_t nb_pkts);
uint16_t iavf_xmit_pkts_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq);
+int iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq);
uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
void iavf_set_default_ptype_table(struct rte_eth_dev *dev);
-void iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq);
+void iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq);
void iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq);
-void iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq);
+void iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq);
static inline
void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq,
@@ -791,7 +754,7 @@ void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq,
* to print the qwords
*/
static inline
-void iavf_dump_tx_descriptor(const struct iavf_tx_queue *txq,
+void iavf_dump_tx_descriptor(const struct ci_tx_queue *txq,
const volatile void *desc, uint16_t tx_id)
{
const char *name;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index f33ceceee1..fdb98b417a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1734,7 +1734,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -1801,7 +1801,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 97420a75fd..9cf7171524 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1845,7 +1845,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
}
static __rte_always_inline int
-iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
+iavf_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -2311,7 +2311,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -2379,7 +2379,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, nb_mbuf, tx_id;
@@ -2447,7 +2447,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -2473,7 +2473,7 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
}
void __rte_cold
-iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
+iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq)
{
unsigned int i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
@@ -2494,7 +2494,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
}
int __rte_cold
-iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
{
txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC;
return 0;
@@ -2512,7 +2512,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 6305c8cdd6..f1bb12c4f4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -17,7 +17,7 @@
#endif
static __rte_always_inline int
-iavf_tx_free_bufs(struct iavf_tx_queue *txq)
+iavf_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
@@ -104,7 +104,7 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
}
static inline void
-_iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq)
+_iavf_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
unsigned i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
@@ -164,7 +164,7 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
}
static inline int
-iavf_tx_vec_queue_default(struct iavf_tx_queue *txq)
+iavf_tx_vec_queue_default(struct ci_tx_queue *txq)
{
if (!txq)
return -1;
@@ -227,7 +227,7 @@ static inline int
iavf_tx_vec_dev_check_default(struct rte_eth_dev *dev)
{
int i;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret;
int result = 0;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 64c3bf0eaa..5c0b2fff46 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1366,7 +1366,7 @@ uint16_t
iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -1435,7 +1435,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1459,13 +1459,13 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
}
void __rte_cold
-iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq)
+iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq)
{
_iavf_tx_queue_release_mbufs_vec(txq);
}
int __rte_cold
-iavf_txq_vec_setup(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
return 0;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 0646a2f978..c74466735d 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1218,10 +1218,8 @@ int
iavf_configure_queues(struct iavf_adapter *adapter,
uint16_t num_queue_pairs, uint16_t index)
{
- struct iavf_rx_queue **rxq =
- (struct iavf_rx_queue **)adapter->dev_data->rx_queues;
- struct iavf_tx_queue **txq =
- (struct iavf_tx_queue **)adapter->dev_data->tx_queues;
+ struct iavf_rx_queue **rxq = (struct iavf_rx_queue **)adapter->dev_data->rx_queues;
+ struct ci_tx_queue **txq = (struct ci_tx_queue **)adapter->dev_data->tx_queues;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
struct virtchnl_vsi_queue_config_info *vc_config;
struct virtchnl_queue_pair_info *vc_qp;
--
2.43.0
* [PATCH v1 08/21] net/ixgbe: convert Tx queue context cache field to ptr
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (6 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 07/21] net/iavf: use common Tx queue structure Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 09/21] net/ixgbe: use common Tx queue structure Bruce Richardson
` (12 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin
Rather than having a two-element array of context cache values inside the
Tx queue structure, convert it to a pointer to a cache placed at the end of
the structure. This makes future merging of the structure easier, since the
"ixgbe_advctx_info" struct no longer needs to be defined when defining a
combined queue structure.
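The shape of the change is the classic single-allocation, trailing-array
pattern; a minimal sketch in plain C (stub types stand in for the real
ixgbe structures, and plain calloc() for rte_zmalloc_socket()):

    #include <stdint.h>
    #include <stdlib.h>

    #define CTX_NUM 2 /* matches IXGBE_CTX_NUM */

    struct advctx_stub { uint64_t flags; }; /* stand-in for ixgbe_advctx_info */
    struct txq_stub {
        struct advctx_stub *ctx_cache;      /* now a pointer, not an array */
        uint32_t ctx_curr;
    };

    static struct txq_stub *txq_alloc(void)
    {
        /* one zeroed allocation covering the queue plus its context cache */
        struct txq_stub *txq = calloc(1, sizeof(*txq) +
                sizeof(struct advctx_stub) * CTX_NUM);
        if (txq == NULL)
            return NULL;
        /* fix up the cache pointer to the space just past the queue struct,
         * as the patch below does with RTE_PTR_ADD() */
        txq->ctx_cache = (struct advctx_stub *)(txq + 1);
        return txq;
    }

Resetting the cache then becomes a memset() through the pointer, which is
exactly the ixgbe_reset_tx_queue() hunk below.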
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 7 ++++---
drivers/net/ixgbe/ixgbe_rxtx.h | 4 ++--
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index f7ddbba1b6..2ca26cd132 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2522,8 +2522,7 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
txq->ctx_curr = 0;
- memset((void *)&txq->ctx_cache, 0,
- IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+ memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
}
static const struct ixgbe_txq_ops def_txq_ops = {
@@ -2741,10 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
/* First allocate the tx queue data structure */
- txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue),
+ txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
+ sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq == NULL)
return -ENOMEM;
+ txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
/*
* Allocate TX ring hardware descriptors. A memzone large enough to
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index f6bae37cf3..847cacf7b5 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -215,8 +215,8 @@ struct ixgbe_tx_queue {
uint8_t wthresh; /**< Write-back threshold reg. */
uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
- /** Hardware context0 history. */
- struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
+ /** Hardware context history. */
+ struct ixgbe_advctx_info *ctx_cache;
const struct ixgbe_txq_ops *ops; /**< txq ops */
bool tx_deferred_start; /**< not in global dev start. */
#ifdef RTE_LIB_SECURITY
--
2.43.0
* [PATCH v1 09/21] net/ixgbe: use common Tx queue structure
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (7 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 08/21] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 13:51 ` Medvedkin, Vladimir
2024-12-02 11:24 ` [PATCH v1 10/21] net/_common_intel: pack " Bruce Richardson
` (11 subsequent siblings)
20 siblings, 1 reply; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Merge in the additional fields used by the ixgbe driver and then convert
that driver over to the common Tx queue structure.
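Worth noting in the tx.h hunk below: sw_ring becomes a union with
sw_ring_vec, so the scalar and vector Tx paths view a single allocation
through different entry types, and only one view is used for a given queue.
A small sketch of that aliasing, with entry layouts assumed from the
earlier common Tx entry patch (illustrative names, not the real
definitions):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct mbuf_stub { int dummy; };          /* stand-in for rte_mbuf */

    struct tx_entry     { struct mbuf_stub *mbuf; uint16_t next_id, last_id; };
    struct tx_entry_vec { struct mbuf_stub *mbuf; }; /* vector: mbuf only */

    struct txq_sketch {
        union { /* one ring pointer, two views; one valid per queue */
            struct tx_entry *sw_ring;
            struct tx_entry_vec *sw_ring_vec;
        };
        uint16_t nb_tx_desc;
    };

    int main(void)
    {
        struct txq_sketch q = { .nb_tx_desc = 64 };
        /* allocate for the larger scalar entry; a vector-only queue could
         * allocate the smaller type instead */
        q.sw_ring = calloc(q.nb_tx_desc, sizeof(*q.sw_ring));
        if (q.sw_ring == NULL)
            return 1;
        printf("scalar entry %zu bytes, vector entry %zu bytes\n",
               sizeof(struct tx_entry), sizeof(struct tx_entry_vec));
        free(q.sw_ring);
        return 0;
    }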
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 14 +++-
drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 64 +++++++++----------
drivers/net/ixgbe/ixgbe_rxtx.h | 56 ++--------------
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 26 ++++----
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 14 ++--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 14 ++--
8 files changed, 80 insertions(+), 114 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c4a1a0c816..51ae3b051d 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -34,9 +34,13 @@ struct ci_tx_queue {
volatile struct i40e_tx_desc *i40e_tx_ring;
volatile struct iavf_tx_desc *iavf_tx_ring;
volatile struct ice_tx_desc *ice_tx_ring;
+ volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
};
volatile uint8_t *qtx_tail; /* register address of tail */
- struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ union {
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ struct ci_tx_entry_vec *sw_ring_vec;
+ };
rte_iova_t tx_ring_dma; /* TX ring DMA address */
uint16_t nb_tx_desc; /* number of TX descriptors */
uint16_t tx_tail; /* current value of tail register */
@@ -87,6 +91,14 @@ struct ci_tx_queue {
uint8_t tc;
bool use_ctx; /* with ctx info, each pkt needs two descriptors */
};
+ struct { /* ixgbe specific values */
+ const struct ixgbe_txq_ops *ops;
+ struct ixgbe_advctx_info *ctx_cache;
+ uint32_t ctx_curr;
+#ifdef RTE_LIB_SECURITY
+ uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
+#endif
+ };
};
};
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 8bee97d191..5f18fbaad5 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1118,7 +1118,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
* RX and TX function.
*/
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
/* TX queue function in primary, set by last queue initialized
* Tx queue may not initialized by primary process
*/
@@ -1623,7 +1623,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
* RX function
*/
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
/* TX queue function in primary, set by last queue initialized
* Tx queue may not initialized by primary process
*/
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index a878db3150..3fd05ed5eb 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -51,7 +51,7 @@ uint16_t
ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
- struct ixgbe_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 2ca26cd132..f8f5f42e5c 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -98,7 +98,7 @@
* Return the total number of buffers freed.
*/
static __rte_always_inline int
-ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t status;
@@ -195,7 +195,7 @@ tx1(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts)
* Copy mbuf pointers to the S/W ring.
*/
static inline void
-ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
+ixgbe_tx_fill_hw_ring(struct ci_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile union ixgbe_adv_tx_desc *txdp = &txq->ixgbe_tx_ring[txq->tx_tail];
@@ -231,7 +231,7 @@ static inline uint16_t
tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring;
uint16_t n = 0;
@@ -344,7 +344,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -362,7 +362,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static inline void
-ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
+ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
__rte_unused uint64_t *mdata)
@@ -493,7 +493,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
* or create a new context descriptor.
*/
static inline uint32_t
-what_advctx_update(struct ixgbe_tx_queue *txq, uint64_t flags,
+what_advctx_update(struct ci_tx_queue *txq, uint64_t flags,
union ixgbe_tx_offload tx_offload)
{
/* If match with the current used context */
@@ -561,7 +561,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
/* Reset transmit descriptors after they have been used */
static inline int
-ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
+ixgbe_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring;
@@ -623,7 +623,7 @@ uint16_t
ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
volatile union ixgbe_adv_tx_desc *txr;
@@ -963,7 +963,7 @@ ixgbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
int i, ret;
uint64_t ol_flags;
struct rte_mbuf *m;
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
for (i = 0; i < nb_pkts; i++) {
m = tx_pkts[i];
@@ -2335,7 +2335,7 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
**********************************************************************/
static void __rte_cold
-ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
unsigned i;
@@ -2350,7 +2350,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
}
static int
-ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
+ixgbe_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
@@ -2408,7 +2408,7 @@ ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
}
static int
-ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
+ixgbe_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -2432,7 +2432,7 @@ ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
}
static int
-ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
+ixgbe_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -2441,7 +2441,7 @@ ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
int
ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
@@ -2450,7 +2450,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
(rte_eal_process_type() != RTE_PROC_PRIMARY ||
- txq->sw_ring_v != NULL)) {
+ txq->sw_ring_vec != NULL)) {
return ixgbe_tx_done_cleanup_vec(txq, free_cnt);
} else {
return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
@@ -2461,7 +2461,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
if (txq != NULL &&
txq->sw_ring != NULL)
@@ -2469,7 +2469,7 @@ ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
}
static void __rte_cold
-ixgbe_tx_queue_release(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
txq->ops->release_mbufs(txq);
@@ -2487,7 +2487,7 @@ ixgbe_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
/* (Re)set dynamic ixgbe_tx_queue fields to defaults */
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = {{0}};
struct ci_tx_entry *txe = txq->sw_ring;
@@ -2536,7 +2536,7 @@ static const struct ixgbe_txq_ops def_txq_ops = {
* in dev_init by secondary process when attaching to an existing ethdev.
*/
void __rte_cold
-ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
+ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
@@ -2618,7 +2618,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_txconf *tx_conf)
{
const struct rte_memzone *tz;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_hw *hw;
uint16_t tx_rs_thresh, tx_free_thresh;
uint64_t offloads;
@@ -2740,12 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
/* First allocate the tx queue data structure */
- txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
+ txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ci_tx_queue) +
sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq == NULL)
return -ENOMEM;
- txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
+ txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ci_tx_queue));
/*
* Allocate TX ring hardware descriptors. A memzone large enough to
@@ -3312,7 +3312,7 @@ ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct ixgbe_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint32_t *status;
uint32_t desc;
@@ -3377,7 +3377,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
for (i = 0; i < dev->data->nb_tx_queues; i++) {
- struct ixgbe_tx_queue *txq = dev->data->tx_queues[i];
+ struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
txq->ops->release_mbufs(txq);
@@ -5284,7 +5284,7 @@ void __rte_cold
ixgbe_dev_tx_init(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint64_t bus_addr;
uint32_t hlreg0;
uint32_t txctrl;
@@ -5402,7 +5402,7 @@ int __rte_cold
ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_rx_queue *rxq;
uint32_t txdctl;
uint32_t dmatxctl;
@@ -5572,7 +5572,7 @@ int __rte_cold
ixgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint32_t txdctl;
int poll_ms;
@@ -5611,7 +5611,7 @@ int __rte_cold
ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint32_t txdctl;
uint32_t txtdh, txtdt;
int poll_ms;
@@ -5685,7 +5685,7 @@ void
ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -5877,7 +5877,7 @@ void __rte_cold
ixgbevf_dev_tx_init(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint64_t bus_addr;
uint32_t txctrl;
uint16_t i;
@@ -5918,7 +5918,7 @@ void __rte_cold
ixgbevf_dev_rxtx_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_rx_queue *rxq;
uint32_t txdctl;
uint32_t rxdctl;
@@ -6127,7 +6127,7 @@ ixgbe_xmit_fixed_burst_vec(void __rte_unused *tx_queue,
}
int
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue __rte_unused *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue __rte_unused *txq)
{
return -1;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 847cacf7b5..4333e5bf2f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -180,56 +180,10 @@ struct ixgbe_advctx_info {
union ixgbe_tx_offload tx_offload_mask;
};
-/**
- * Structure associated with each TX queue.
- */
-struct ixgbe_tx_queue {
- /** TX ring virtual address. */
- volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
- rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
- union {
- struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
- struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
- };
- volatile uint8_t *qtx_tail; /**< Address of TDT register. */
- uint16_t nb_tx_desc; /**< number of TX descriptors. */
- uint16_t tx_tail; /**< current value of TDT reg. */
- /**< Start freeing TX buffers if there are less free descriptors than
- this value. */
- uint16_t tx_free_thresh;
- /** Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- /** Number of TX descriptors used since RS bit was set. */
- uint16_t nb_tx_used;
- /** Index to last TX descriptor to have been cleaned. */
- uint16_t last_desc_cleaned;
- /** Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- uint16_t tx_next_dd; /**< next desc to scan for DD bit */
- uint16_t tx_next_rs; /**< next desc to set RS bit */
- uint16_t queue_id; /**< TX queue index. */
- uint16_t reg_idx; /**< TX queue register index. */
- uint16_t port_id; /**< Device port identifier. */
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
- uint32_t ctx_curr; /**< Hardware context states. */
- /** Hardware context history. */
- struct ixgbe_advctx_info *ctx_cache;
- const struct ixgbe_txq_ops *ops; /**< txq ops */
- bool tx_deferred_start; /**< not in global dev start. */
-#ifdef RTE_LIB_SECURITY
- uint8_t using_ipsec;
- /**< indicates that IPsec TX feature is in use */
-#endif
- const struct rte_memzone *mz;
-};
-
struct ixgbe_txq_ops {
- void (*release_mbufs)(struct ixgbe_tx_queue *txq);
- void (*free_swring)(struct ixgbe_tx_queue *txq);
- void (*reset)(struct ixgbe_tx_queue *txq);
+ void (*release_mbufs)(struct ci_tx_queue *txq);
+ void (*free_swring)(struct ci_tx_queue *txq);
+ void (*reset)(struct ci_tx_queue *txq);
};
/*
@@ -250,7 +204,7 @@ struct ixgbe_txq_ops {
* the queue parameters. Used in tx_queue_setup by primary process and then
* in dev_init by secondary process when attaching to an existing ethdev.
*/
-void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq);
+void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq);
/**
* Sets the rx_pkt_burst callback in the ixgbe rte_eth_dev instance.
@@ -287,7 +241,7 @@ void ixgbe_recycle_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb_mbufs);
uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq);
+int ixgbe_txq_vec_setup(struct ci_tx_queue *txq);
uint64_t ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev);
uint64_t ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index cc51bf6eed..81fd8bb64d 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -12,7 +12,7 @@
#include "ixgbe_rxtx.h"
static __rte_always_inline int
-ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t status;
@@ -32,7 +32,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
- txep = &txq->sw_ring_v[txq->tx_next_dd - (n - 1)];
+ txep = &txq->sw_ring_vec[txq->tx_next_dd - (n - 1)];
m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
if (likely(m != NULL)) {
free[0] = m;
@@ -79,7 +79,7 @@ tx_backlog_entry(struct ci_tx_entry_vec *txep,
}
static inline void
-_ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
unsigned int i;
struct ci_tx_entry_vec *txe;
@@ -92,14 +92,14 @@ _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
i != txq->tx_tail;
i = (i + 1) % txq->nb_tx_desc) {
- txe = &txq->sw_ring_v[i];
+ txe = &txq->sw_ring_vec[i];
rte_pktmbuf_free_seg(txe->mbuf);
}
txq->nb_tx_free = max_desc;
/* reset tx_entry */
for (i = 0; i < txq->nb_tx_desc; i++) {
- txe = &txq->sw_ring_v[i];
+ txe = &txq->sw_ring_vec[i];
txe->mbuf = NULL;
}
}
@@ -134,22 +134,22 @@ _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static inline void
-_ixgbe_tx_free_swring_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_tx_free_swring_vec(struct ci_tx_queue *txq)
{
if (txq == NULL)
return;
if (txq->sw_ring != NULL) {
- rte_free(txq->sw_ring_v - 1);
- txq->sw_ring_v = NULL;
+ rte_free(txq->sw_ring_vec - 1);
+ txq->sw_ring_vec = NULL;
}
}
static inline void
-_ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_reset_tx_queue_vec(struct ci_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } };
- struct ci_tx_entry_vec *txe = txq->sw_ring_v;
+ struct ci_tx_entry_vec *txe = txq->sw_ring_vec;
uint16_t i;
/* Zero out HW ring memory */
@@ -199,14 +199,14 @@ ixgbe_rxq_vec_setup_default(struct ixgbe_rx_queue *rxq)
}
static inline int
-ixgbe_txq_vec_setup_default(struct ixgbe_tx_queue *txq,
+ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
const struct ixgbe_txq_ops *txq_ops)
{
- if (txq->sw_ring_v == NULL)
+ if (txq->sw_ring_vec == NULL)
return -1;
/* leave the first one for overflow */
- txq->sw_ring_v = txq->sw_ring_v + 1;
+ txq->sw_ring_vec = txq->sw_ring_vec + 1;
txq->ops = txq_ops;
return 0;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 06be7ec82a..cb749a3760 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -571,7 +571,7 @@ uint16_t
ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -591,7 +591,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -611,7 +611,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
tx_backlog_entry(txep, tx_pkts, nb_commit);
@@ -634,7 +634,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
_ixgbe_tx_queue_release_mbufs_vec(txq);
}
@@ -646,13 +646,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
_ixgbe_tx_free_swring_vec(txq);
}
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
_ixgbe_reset_tx_queue_vec(txq);
}
@@ -670,7 +670,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
}
int __rte_cold
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue *txq)
{
return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops);
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a21a57bd55..e46550f76a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -693,7 +693,7 @@ uint16_t
ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -713,7 +713,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -734,7 +734,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
tx_backlog_entry(txep, tx_pkts, nb_commit);
@@ -757,7 +757,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
_ixgbe_tx_queue_release_mbufs_vec(txq);
}
@@ -769,13 +769,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
_ixgbe_tx_free_swring_vec(txq);
}
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
_ixgbe_reset_tx_queue_vec(txq);
}
@@ -793,7 +793,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
}
int __rte_cold
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue *txq)
{
return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops);
}
--
2.43.0
* [PATCH v1 10/21] net/_common_intel: pack Tx queue structure
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (8 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 09/21] net/ixgbe: use common Tx queue structure Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 11/21] net/_common_intel: add post-Tx buffer free function Bruce Richardson
` (10 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes, Anatoly Burakov
Move some fields around to better pack the Tx queue structure and to
make sure all data used by the vector codepaths is on the first
cacheline of the structure. Checking with "pahole" on a 64-bit build,
only one 6-byte hole is left in the structure - on the second cacheline
- after this patch.
As part of the reordering, move the p/h/wthresh values to the
ixgbe-specific part of the union, since ixgbe is the only driver which
actually uses those values. The i40e and ice drivers just record the
values for later return, so we can drop them from the Tx queue structure
for those drivers and simply report the defaults in all cases.
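To guard the first-cacheline property without rerunning pahole by hand,
a build-time check along these lines could be added (a sketch only, not
part of this patch; it assumes C11 static_assert, that rte_common.h is
included for RTE_CACHE_LINE_MIN_SIZE, and that tx_next_rs is the last
field the vector paths touch):

	#include <assert.h>
	#include <stddef.h>

	/* the vector Tx code reads fields up to and including tx_next_rs */
	static_assert(offsetof(struct ci_tx_queue, tx_next_rs) <
			RTE_CACHE_LINE_MIN_SIZE,
			"vector-path Tx queue fields must fit in the first cacheline");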
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 12 +++++-------
drivers/net/i40e/i40e_rxtx.c | 9 +++------
drivers/net/ice/ice_rxtx.c | 9 +++------
3 files changed, 11 insertions(+), 19 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 51ae3b051d..c372d2838b 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -41,7 +41,6 @@ struct ci_tx_queue {
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
struct ci_tx_entry_vec *sw_ring_vec;
};
- rte_iova_t tx_ring_dma; /* TX ring DMA address */
uint16_t nb_tx_desc; /* number of TX descriptors */
uint16_t tx_tail; /* current value of tail register */
uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
@@ -55,16 +54,14 @@ struct ci_tx_queue {
uint16_t tx_free_thresh;
/* Number of TX descriptors to use before RS bit is set. */
uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
uint16_t port_id; /* Device port identifier. */
uint16_t queue_id; /* TX queue index. */
uint16_t reg_idx;
- uint64_t offloads;
uint16_t tx_next_dd;
uint16_t tx_next_rs;
+ uint64_t offloads;
uint64_t mbuf_errors;
+ rte_iova_t tx_ring_dma; /* TX ring DMA address */
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
union { /* the VSI this queue belongs to */
@@ -95,9 +92,10 @@ struct ci_tx_queue {
const struct ixgbe_txq_ops *ops;
struct ixgbe_advctx_info *ctx_cache;
uint32_t ctx_curr;
-#ifdef RTE_LIB_SECURITY
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
-#endif
};
};
};
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 305bc53480..539b170266 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2539,9 +2539,6 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->nb_tx_desc = nb_desc;
txq->tx_rs_thresh = tx_rs_thresh;
txq->tx_free_thresh = tx_free_thresh;
- txq->pthresh = tx_conf->tx_thresh.pthresh;
- txq->hthresh = tx_conf->tx_thresh.hthresh;
- txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
txq->reg_idx = reg_idx;
txq->port_id = dev->data->port_id;
@@ -3310,9 +3307,9 @@ i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_thresh.pthresh = txq->pthresh;
- qinfo->conf.tx_thresh.hthresh = txq->hthresh;
- qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_thresh.pthresh = I40E_DEFAULT_TX_PTHRESH;
+ qinfo->conf.tx_thresh.hthresh = I40E_DEFAULT_TX_HTHRESH;
+ qinfo->conf.tx_thresh.wthresh = I40E_DEFAULT_TX_WTHRESH;
qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index bcc7c7a016..e2e147ba3e 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1492,9 +1492,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
txq->nb_tx_desc = nb_desc;
txq->tx_rs_thresh = tx_rs_thresh;
txq->tx_free_thresh = tx_free_thresh;
- txq->pthresh = tx_conf->tx_thresh.pthresh;
- txq->hthresh = tx_conf->tx_thresh.hthresh;
- txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
txq->reg_idx = vsi->base_queue + queue_idx;
@@ -1583,9 +1580,9 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_thresh.pthresh = txq->pthresh;
- qinfo->conf.tx_thresh.hthresh = txq->hthresh;
- qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_thresh.pthresh = ICE_DEFAULT_TX_PTHRESH;
+ qinfo->conf.tx_thresh.hthresh = ICE_DEFAULT_TX_HTHRESH;
+ qinfo->conf.tx_thresh.wthresh = ICE_DEFAULT_TX_WTHRESH;
qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
--
2.43.0
* [PATCH v1 11/21] net/_common_intel: add post-Tx buffer free function
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (9 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 10/21] net/_common_intel: pack " Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 12:59 ` David Marchand
2024-12-02 11:24 ` [PATCH v1 12/21] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
` (9 subsequent siblings)
20 siblings, 1 reply; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
The actions taken for post-Tx buffer freeing in the SSE and AVX code
paths of the i40e, iavf and ice drivers are all common, so centralize
them in the common net/_common_intel code.
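The devirtualisation trick that keeps this zero-cost is that
ci_tx_free_bufs() is __rte_always_inline and each driver passes a small
inline predicate for the DD bit, so the "function pointer" is resolved
at compile time. As a sketch, a hypothetical driver (all names here are
invented for illustration) would hook in like this:

	/* per-driver DD-bit check; foo_tx_ring and FOO_TXD_STAT_DD stand
	 * in for the driver's real descriptor layout */
	static inline int
	foo_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
	{
		return (txq->foo_tx_ring[idx].status & FOO_TXD_STAT_DD) != 0;
	}

	static __rte_always_inline int
	foo_tx_free_bufs(struct ci_tx_queue *txq)
	{
		return ci_tx_free_bufs(txq, foo_tx_desc_done);
	}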
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 71 ++++++++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_common.h | 72 ++++---------------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 61 ++++-----------------
drivers/net/ice/ice_rxtx_vec_common.h | 61 ++++-----------------
4 files changed, 98 insertions(+), 167 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c372d2838b..a930309c05 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -7,6 +7,7 @@
#include <stdint.h>
#include <rte_mbuf.h>
+#include <rte_ethdev.h>
/* forward declaration of the common intel (ci) queue structure */
struct ci_tx_queue;
@@ -107,4 +108,74 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
txep[i].mbuf = tx_pkts[i];
}
+#define IETH_VPMD_TX_MAX_FREE_BUF 64
+
+typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
+
+static __rte_always_inline int
+ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+{
+ struct ci_tx_entry *txep;
+ uint32_t n;
+ uint32_t i;
+ int nb_free = 0;
+ struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
+
+ /* check DD bits on threshold descriptor */
+ if (!desc_done(txq, txq->tx_next_dd))
+ return 0;
+
+ n = txq->tx_rs_thresh;
+
+ /* first buffer to free from S/W ring is at index
+ * tx_next_dd - (tx_rs_thresh-1)
+ */
+ txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
+
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
+ for (i = 0; i < n; i++) {
+ free[i] = txep[i].mbuf;
+ /* no need to reset txep[i].mbuf in vector path */
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
+ goto done;
+ }
+
+ m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+ if (likely(m != NULL)) {
+ free[0] = m;
+ nb_free = 1;
+ for (i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (likely(m != NULL)) {
+ if (likely(m->pool == free[0]->pool)) {
+ free[nb_free++] = m;
+ } else {
+ rte_mempool_put_bulk(free[0]->pool,
+ (void *)free,
+ nb_free);
+ free[0] = m;
+ nb_free = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+ } else {
+ for (i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (m != NULL)
+ rte_mempool_put(m->pool, m);
+ }
+ }
+
+done:
+ /* buffers were freed, update counters */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ return txq->tx_rs_thresh;
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 57d6263ccf..907d32dd0b 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -16,72 +16,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+i40e_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->i40e_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
+ rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
- for (i = 0; i < n; i++) {
- free[i] = txep[i].mbuf;
- /* no need to reset txep[i].mbuf in vector path */
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m != NULL)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, i40e_tx_desc_done);
}
static inline void
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index f1bb12c4f4..7130229f23 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -16,61 +16,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+iavf_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->iavf_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) ==
+ rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
iavf_tx_free_bufs(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, iavf_tx_desc_done);
}
static inline void
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index b39289ceb5..c6c3933299 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -12,61 +12,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->ice_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) ==
+ rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
- rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, ice_tx_desc_done);
}
static inline void
--
2.43.0
* [PATCH v1 12/21] net/_common_intel: add Tx buffer free fn for AVX-512
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (10 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 11/21] net/_common_intel: add post-Tx buffer free function Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 13/21] net/iavf: use common Tx " Bruce Richardson
` (8 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes, Anatoly Burakov
The AVX-512 code paths for the ice and i40e drivers are common, and
differ from the regular post-Tx free function in that the SW ring from
which the buffers are freed contains nothing other than the mbuf
pointer. Merge these into a common function in net/_common_intel to
reduce duplication.
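For reference, the distinction being exploited is between the two SW
ring entry layouts introduced earlier in this series; their shape is
roughly as follows (a sketch - the scalar entry's extra fields are shown
as in the existing drivers):

	struct ci_tx_entry {		/* scalar paths: mbuf plus chaining info */
		struct rte_mbuf *mbuf;
		uint16_t next_id;
		uint16_t last_id;
	};

	struct ci_tx_entry_vec {	/* vector paths: mbuf pointer only */
		struct rte_mbuf *mbuf;
	};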
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 93 +++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 114 +----------------------
drivers/net/ice/ice_rxtx_vec_avx512.c | 117 +-----------------------
3 files changed, 95 insertions(+), 229 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index a930309c05..145501834a 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -178,4 +178,97 @@ ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
return txq->tx_rs_thresh;
}
+static __rte_always_inline int
+ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+{
+ int nb_free = 0;
+ struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF];
+ struct rte_mbuf *m;
+
+ /* check DD bits on threshold descriptor */
+ if (!desc_done(txq, txq->tx_next_dd))
+ return 0;
+
+ const uint32_t n = txq->tx_rs_thresh;
+
+ /* first buffer to free from S/W ring is at index
+ * tx_next_dd - (tx_rs_thresh - 1)
+ */
+ struct ci_tx_entry_vec *txep = txq->sw_ring_vec;
+ txep += txq->tx_next_dd - (n - 1);
+
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ struct rte_mempool *mp = txep[0].mbuf->pool;
+ void **cache_objs;
+ struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
+ rte_lcore_id());
+
+ if (!cache || cache->len == 0)
+ goto normal;
+
+ cache_objs = &cache->objs[cache->len];
+
+ if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
+ rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
+ goto done;
+ }
+
+ /* The cache follows the following algorithm
+ * 1. Add the objects to the cache
+ * 2. Anything greater than the cache min value (if it
+ * crosses the cache flush threshold) is flushed to the ring.
+ */
+ /* Add elements back into the cache */
+ uint32_t copied = 0;
+ /* n is multiple of 32 */
+ while (copied < n) {
+ memcpy(&cache_objs[copied], &txep[copied], 32 * sizeof(void *));
+ copied += 32;
+ }
+ cache->len += n;
+
+ if (cache->len >= cache->flushthresh) {
+ rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
+ cache->len - cache->size);
+ cache->len = cache->size;
+ }
+ goto done;
+ }
+
+normal:
+ m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+ if (likely(m)) {
+ free[0] = m;
+ nb_free = 1;
+ for (uint32_t i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (likely(m)) {
+ if (likely(m->pool == free[0]->pool)) {
+ free[nb_free++] = m;
+ } else {
+ rte_mempool_put_bulk(free[0]->pool, (void *)free, nb_free);
+ free[0] = m;
+ nb_free = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+ } else {
+ for (uint32_t i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (m)
+ rte_mempool_put(m->pool, m);
+ }
+ }
+
+done:
+ /* buffers were freed, update counters */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ return txq->tx_rs_thresh;
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index a3f6d1667f..9bb2a44231 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -754,118 +754,6 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
rx_pkts + retval, nb_pkts);
}
-static __rte_always_inline int
-i40e_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = (void *)txq->sw_ring;
- txep += txq->tx_next_dd - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- void **cache_objs;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
-
- if (!cache || n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_generic_put(mp, (void *)txep, n, cache);
- goto done;
- }
-
- cache_objs = &cache->objs[cache->len];
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it
- * crosses the cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_load_si512(&txep[copied]);
- const __m512i b = _mm512_load_si512(&txep[copied + 8]);
- const __m512i c = _mm512_load_si512(&txep[copied + 16]);
- const __m512i d = _mm512_load_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_load_si512(&txep[copied]);
- const __m512i b = _mm512_load_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk
- (mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- rte_mbuf_prefetch_part2(txep[i + 3].mbuf);
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static inline void
vtx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
{
@@ -941,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index eabd8b04a0..538be707ef 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -859,121 +859,6 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
rx_pkts + retval, nb_pkts);
}
-static __rte_always_inline int
-ice_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
- rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh - 1)
- */
- txep = (void *)txq->sw_ring;
- txep += txq->tx_next_dd - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- void **cache_objs;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
-
- if (!cache || cache->len == 0)
- goto normal;
-
- cache_objs = &cache->objs[cache->len];
-
- if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
- goto done;
- }
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it
- * crosses the cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
- const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
- const __m512i d = _mm512_loadu_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk
- (mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
-normal:
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline void
ice_vtx1(volatile struct ice_tx_desc *txdp,
struct rte_mbuf *pkt, uint64_t flags, bool do_offload)
@@ -1064,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
--
2.43.0
* [PATCH v1 13/21] net/iavf: use common Tx free fn for AVX-512
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (11 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 12/21] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 14/21] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
` (7 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes,
Vladimir Medvedkin, Anatoly Burakov
Switch the iavf driver to use the common Tx free function. This requires
one additional parameter to that function, since iavf sometimes uses
context descriptors, which means we have double the descriptors per SW
ring slot.
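Concretely, with context descriptors in use each packet occupies two
ring slots, so the number of mbufs freed per cleanup round is halved by
shifting, and the SW ring index is scaled the same way. For example,
with tx_rs_thresh = 32 and ctx_descs true (the lines as they appear in
the hunk below):

	const uint32_t n = txq->tx_rs_thresh >> ctx_descs;	/* 32 -> 16 */
	txep += (txq->tx_next_dd >> ctx_descs) - (n - 1);	/* mbuf-slot units */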
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 6 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 119 +-----------------------
drivers/net/ice/ice_rxtx_vec_avx512.c | 2 +-
4 files changed, 7 insertions(+), 122 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 145501834a..21f4d71e50 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -179,7 +179,7 @@ ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
}
static __rte_always_inline int
-ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
{
int nb_free = 0;
struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF];
@@ -189,13 +189,13 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
if (!desc_done(txq, txq->tx_next_dd))
return 0;
- const uint32_t n = txq->tx_rs_thresh;
+ const uint32_t n = txq->tx_rs_thresh >> ctx_descs;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh - 1)
*/
struct ci_tx_entry_vec *txep = txq->sw_ring_vec;
- txep += txq->tx_next_dd - (n - 1);
+ txep += (txq->tx_next_dd >> ctx_descs) - (n - 1);
if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 9bb2a44231..c555c3491d 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -829,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- ci_tx_free_bufs_vec(txq, i40e_tx_desc_done);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 9cf7171524..8543490c70 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1844,121 +1844,6 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
true);
}
-static __rte_always_inline int
-iavf_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh >> txq->use_ctx;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = (void *)txq->sw_ring;
- txep += (txq->tx_next_dd >> txq->use_ctx) - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
- void **cache_objs;
-
- if (!cache || cache->len == 0)
- goto normal;
-
- cache_objs = &cache->objs[cache->len];
-
- if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
- goto done;
- }
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it crosses the
- * cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
- const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
- const __m512i d = _mm512_loadu_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp,
- &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
-normal:
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline void
tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -2320,7 +2205,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -2388,7 +2273,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, true);
nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1);
nb_commit &= 0xFFFE;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 538be707ef..f6ec593f96 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -949,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ci_tx_free_bufs_vec(txq, ice_tx_desc_done);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
--
2.43.0
* [PATCH v1 14/21] net/ice: move Tx queue mbuf cleanup fn to common
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (12 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 13/21] net/iavf: use common Tx " Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 15/21] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
` (6 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Konstantin Ananyev
The functions that loop over the Tx queue and clean up all the mbufs on
it, e.g. for queue shutdown, are not device specific and so can be moved
into the _common_intel headers. The only complication is ensuring that
the correct ring format, either minimal vector or full structure, is
used. The ice driver currently uses two functions and a function pointer
to help with this - though one of those functions actually performs a
further check internally - so we can simplify this down to just one
common function, with a flag set in the appropriate place. This avoids
checking for AVX-512-specific functions, which were the only ones using
the smaller struct in this driver.
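For readers skimming the diff: the reason the vector cleanup cannot
simply scan the whole ring is that the vector Tx paths leave stale mbuf
pointers behind in freed slots, so only the window from the last cleaned
position up to tx_tail is safe to free, wrapping at the end of the ring.
A minimal standalone sketch of that walk - illustrative names, not the
series' API:

#include <rte_mbuf.h>

/* Illustrative only: free the live window [start, end) of an mbuf
 * ring of nb_desc entries, wrapping at the ring end. This is the
 * pattern the patch below factors into one common loop.
 */
static void
free_live_window(struct rte_mbuf **ring, uint16_t nb_desc,
		uint16_t start, uint16_t end)
{
	uint16_t i = start;

	if (end < i) {	/* live window wraps past the ring end */
		for (; i < nb_desc; i++) {
			rte_pktmbuf_free_seg(ring[i]);
			ring[i] = NULL;
		}
		i = 0;
	}
	for (; i < end; i++) {
		rte_pktmbuf_free_seg(ring[i]);
		ring[i] = NULL;
	}
}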
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 49 ++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.c | 5 +--
drivers/net/ice/ice_ethdev.h | 3 +-
drivers/net/ice/ice_rxtx.c | 33 +++++------------
drivers/net/ice/ice_rxtx_vec_common.h | 51 ---------------------------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 +--
6 files changed, 61 insertions(+), 84 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 21f4d71e50..2a34ec267d 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -65,6 +65,8 @@ struct ci_tx_queue {
rte_iova_t tx_ring_dma; /* TX ring DMA address */
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
+ bool vector_tx; /* port is using vector TX */
+ bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
union { /* the VSI this queue belongs to */
struct i40e_vsi *i40e_vsi;
struct iavf_vsi *iavf_vsi;
@@ -74,7 +76,6 @@ struct ci_tx_queue {
union {
struct { /* ICE driver specific values */
- ice_tx_release_mbufs_t tx_rel_mbufs;
uint32_t q_teid; /* TX schedule node id. */
};
struct { /* I40E driver specific values */
@@ -271,4 +272,50 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
return txq->tx_rs_thresh;
}
+#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \
+ uint16_t i = start; \
+ if (txq->tx_tail < i) { \
+ for (; i < txq->nb_tx_desc; i++) { \
+ rte_pktmbuf_free_seg(swr[i].mbuf); \
+ swr[i].mbuf = NULL; \
+ } \
+ i = 0; \
+ } \
+ for (; i < txq->tx_tail; i++) { \
+ rte_pktmbuf_free_seg(swr[i].mbuf); \
+ swr[i].mbuf = NULL; \
+ } \
+} while (0)
+
+static inline void
+ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
+{
+ if (unlikely(!txq || !txq->sw_ring))
+ return;
+
+ if (!txq->vector_tx) {
+ for (uint16_t i = 0; i < txq->nb_tx_desc; i++) {
+ if (txq->sw_ring[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+ txq->sw_ring[i].mbuf = NULL;
+ }
+ }
+ return;
+ }
+
+ /**
+ * vPMD tx will not set sw_ring's mbuf to NULL after free,
+ * so need to free remains more carefully.
+ */
+ const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;
+
+ if (txq->vector_sw_ring) {
+ struct ci_tx_entry_vec *swr = txq->sw_ring_vec;
+ IETH_FREE_BUFS_LOOP(txq, swr, start);
+ } else {
+ struct ci_tx_entry *swr = txq->sw_ring;
+ IETH_FREE_BUFS_LOOP(txq, swr, start);
+ }
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index a0c065d78c..c20399cd84 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -24,6 +24,7 @@
#include "ice_generic_flow.h"
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#include "_common_intel/tx.h"
#define DCF_NUM_MACADDR_MAX 64
@@ -500,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -650,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ba54655499..afe8dae497 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -621,13 +621,12 @@ struct ice_adapter {
/* Set bit if the engine is disabled */
unsigned long disabled_engine_mask;
struct ice_parser *psr;
-#ifdef RTE_ARCH_X86
+ /* used only on X86, zero on other Archs */
bool rx_use_avx2;
bool rx_use_avx512;
bool tx_use_avx2;
bool tx_use_avx512;
bool rx_vec_offload_support;
-#endif
};
struct ice_vsi_vlan_pvid_info {
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index e2e147ba3e..0a890e587c 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -751,6 +751,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
struct ice_aqc_add_tx_qgrp *txq_elem;
struct ice_tlan_ctx tx_ctx;
int buf_len;
+ struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -822,6 +823,10 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EIO;
}
+ /* record what kind of descriptor cleanup we need on teardown */
+ txq->vector_tx = ad->tx_vec_allowed;
+ txq->vector_sw_ring = ad->tx_use_avx512;
+
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
rte_free(txq_elem);
@@ -1006,25 +1011,6 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return 0;
}
-/* Free all mbufs for descriptors in tx queue */
-static void
-_ice_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
- return;
- }
-
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static void
ice_reset_tx_queue(struct ci_tx_queue *txq)
{
@@ -1103,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
ice_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1166,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->qtx_tail = NULL;
return 0;
@@ -1518,7 +1504,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
ice_reset_tx_queue(txq);
txq->q_set = true;
dev->data->tx_queues[queue_idx] = txq;
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs;
ice_set_tx_function_flag(dev, txq);
return 0;
@@ -1546,8 +1531,7 @@ ice_tx_queue_release(void *txq)
return;
}
- if (q->tx_rel_mbufs != NULL)
- q->tx_rel_mbufs(q);
+ ci_txq_release_all_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -2460,7 +2444,6 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
txq->q_set = true;
pf->fdir.txq = txq;
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs;
return ICE_SUCCESS;
}
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index c6c3933299..907828b675 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -61,57 +61,6 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline void
-_ice_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (unlikely(!txq || !txq->sw_ring)) {
- PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
- return;
- }
-
- /**
- * vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
- */
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
-
-#ifdef __AVX512VL__
- struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id];
-
- if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
- dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- } else
-#endif
- {
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static inline int
ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index a62a32a552..04f6408338 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -793,12 +793,10 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq)
}
int __rte_cold
-ice_txq_vec_setup(struct ci_tx_queue __rte_unused *txq)
+ice_txq_vec_setup(struct ci_tx_queue *txq)
{
if (!txq)
return -1;
-
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs_vec;
return 0;
}
--
2.43.0
* [PATCH v1 15/21] net/i40e: use common Tx queue mbuf cleanup fn
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (13 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 14/21] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 16/21] net/ixgbe: " Bruce Richardson
` (5 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes
Update the driver to be similar to the "ice" driver and use the common
mbuf ring cleanup code on shutdown of a Tx queue.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_ethdev.h | 4 +-
drivers/net/i40e/i40e_rxtx.c | 70 ++++------------------------------
drivers/net/i40e/i40e_rxtx.h | 1 -
3 files changed, 9 insertions(+), 66 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index d351193ed9..ccc8732d7d 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -1260,12 +1260,12 @@ struct i40e_adapter {
/* For RSS reta table update */
uint8_t rss_reta_updated;
-#ifdef RTE_ARCH_X86
+
+ /* used only on x86, zero on other architectures */
bool rx_use_avx2;
bool rx_use_avx512;
bool tx_use_avx2;
bool tx_use_avx512;
-#endif
};
/**
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 539b170266..b70919c5dc 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1875,6 +1875,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int err;
struct ci_tx_queue *txq;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct i40e_adapter *ad = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1889,6 +1890,9 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
PMD_DRV_LOG(WARNING, "TX queue %u is deferred start",
tx_queue_id);
+ txq->vector_tx = ad->tx_vec_allowed;
+ txq->vector_sw_ring = ad->tx_use_avx512;
+
/*
* tx_queue_id is queue id application refers to, while
* rxq->reg_idx is the real queue index.
@@ -1929,7 +1933,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return err;
}
- i40e_tx_queue_release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
i40e_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -2604,7 +2608,7 @@ i40e_tx_queue_release(void *txq)
return;
}
- i40e_tx_queue_release_mbufs(q);
+ ci_txq_release_all_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -2701,66 +2705,6 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
rxq->rxrearm_nb = 0;
}
-void
-i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- struct rte_eth_dev *dev;
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
- return;
- }
-
- dev = &rte_eth_devices[txq->port_id];
-
- /**
- * vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
- */
-#ifdef CC_AVX512_SUPPORT
- if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- return;
- }
-#endif
- if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx2 ||
- dev->tx_pkt_burst == i40e_xmit_pkts_vec) {
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- } else {
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
- }
-}
-
static int
i40e_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
@@ -3127,7 +3071,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
if (!dev->data->tx_queues[i])
continue;
- i40e_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+ ci_txq_release_all_mbufs(dev->data->tx_queues[i]);
i40e_reset_tx_queue(dev->data->tx_queues[i]);
}
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 043d1df912..858b8433e9 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -179,7 +179,6 @@ void i40e_dev_clear_queues(struct rte_eth_dev *dev);
void i40e_dev_free_queues(struct rte_eth_dev *dev);
void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
void i40e_reset_tx_queue(struct ci_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq);
int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
--
2.43.0
* [PATCH v1 16/21] net/ixgbe: use common Tx queue mbuf cleanup fn
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (14 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 15/21] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 17/21] net/iavf: " Bruce Richardson
` (4 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Update the driver to use the common cleanup function.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 22 +++---------------
drivers/net/ixgbe/ixgbe_rxtx.h | 1 -
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 28 ++---------------------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 7 ------
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 7 ------
5 files changed, 5 insertions(+), 60 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index f8f5f42e5c..5ab62808a0 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2334,21 +2334,6 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
*
**********************************************************************/
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- unsigned i;
-
- if (txq->sw_ring != NULL) {
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf != NULL) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
- }
-}
-
static int
ixgbe_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt)
{
@@ -2472,7 +2457,7 @@ static void __rte_cold
ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->free_swring(txq);
rte_memzone_free(txq->mz);
rte_free(txq);
@@ -2526,7 +2511,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops def_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
@@ -3380,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->reset(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -5655,7 +5639,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
if (txq->ops != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->reset(txq);
}
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 4333e5bf2f..11689eb432 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -181,7 +181,6 @@ struct ixgbe_advctx_info {
};
struct ixgbe_txq_ops {
- void (*release_mbufs)(struct ci_tx_queue *txq);
void (*free_swring)(struct ci_tx_queue *txq);
void (*reset)(struct ci_tx_queue *txq);
};
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 81fd8bb64d..65794e45cb 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -78,32 +78,6 @@ tx_backlog_entry(struct ci_tx_entry_vec *txep,
txep[i].mbuf = tx_pkts[i];
}
-static inline void
-_ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- unsigned int i;
- struct ci_tx_entry_vec *txe;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-
- if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc)
- return;
-
- /* release the used mbufs in sw_ring */
- for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
- i != txq->tx_tail;
- i = (i + 1) % txq->nb_tx_desc) {
- txe = &txq->sw_ring_vec[i];
- rte_pktmbuf_free_seg(txe->mbuf);
- }
- txq->nb_tx_free = max_desc;
-
- /* reset tx_entry */
- for (i = 0; i < txq->nb_tx_desc; i++) {
- txe = &txq->sw_ring_vec[i];
- txe->mbuf = NULL;
- }
-}
-
static inline void
_ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -208,6 +182,8 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
/* leave the first one for overflow */
txq->sw_ring_vec = txq->sw_ring_vec + 1;
txq->ops = txq_ops;
+ txq->vector_tx = 1;
+ txq->vector_sw_ring = 1;
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index cb749a3760..2ccb399b64 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -633,12 +633,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_pkts;
}
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- _ixgbe_tx_queue_release_mbufs_vec(txq);
-}
-
void __rte_cold
ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -658,7 +652,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops vec_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index e46550f76a..fa26365f06 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -756,12 +756,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_pkts;
}
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- _ixgbe_tx_queue_release_mbufs_vec(txq);
-}
-
void __rte_cold
ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -781,7 +775,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops vec_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
--
2.43.0
* [PATCH v1 17/21] net/iavf: use common Tx queue mbuf cleanup fn
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (15 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 16/21] net/ixgbe: " Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 18/21] net/ice: use vector SW ring for all vector paths Bruce Richardson
` (3 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin,
Konstantin Ananyev, Anatoly Burakov
Adjust the iavf driver to also use the common mbuf freeing functions on
Tx queue release/cleanup. The implementation is complicated a little by
the need to integrate the additional "use_ctx" parameter for the iavf
code, but the changes in other drivers are minimal - just a constant
"false" parameter.
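A note on the new parameter, since it is easy to miss in the diff: when
context descriptors are enabled, each packet consumes two descriptor
slots (context + data) but only one SW ring entry, so the descriptor
ring indexes must be halved before walking the SW ring - hence the
">> use_ctx" shifts below. A hedged sketch of the index mapping:

#include <stdint.h>

/* illustrative: map a descriptor-ring index to a SW ring index when
 * each packet may occupy a (context + data) descriptor pair */
static inline uint16_t
desc_to_sw_idx(uint16_t desc_idx, uint8_t use_ctx)
{
	/* use_ctx is 0 or 1, so this is a no-op or a halving,
	 * e.g. descriptor index 130 with use_ctx == 1 -> SW index 65 */
	return desc_idx >> use_ctx;
}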
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 27 +++++++++---------
drivers/net/i40e/i40e_rxtx.c | 6 ++--
drivers/net/iavf/iavf_rxtx.c | 37 ++-----------------------
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 24 ++--------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 18 ------------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 9 ++----
drivers/net/ice/ice_dcf_ethdev.c | 4 +--
drivers/net/ice/ice_rxtx.c | 6 ++--
drivers/net/ixgbe/ixgbe_rxtx.c | 6 ++--
9 files changed, 31 insertions(+), 106 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 2a34ec267d..279eb6ea67 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -272,23 +272,23 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
return txq->tx_rs_thresh;
}
-#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \
+#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
uint16_t i = start; \
- if (txq->tx_tail < i) { \
- for (; i < txq->nb_tx_desc; i++) { \
+ if (end < i) { \
+ for (; i < nb_desc; i++) { \
rte_pktmbuf_free_seg(swr[i].mbuf); \
swr[i].mbuf = NULL; \
} \
i = 0; \
} \
- for (; i < txq->tx_tail; i++) { \
+ for (; i < end; i++) { \
rte_pktmbuf_free_seg(swr[i].mbuf); \
swr[i].mbuf = NULL; \
} \
} while (0)
static inline void
-ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
+ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
{
if (unlikely(!txq || !txq->sw_ring))
return;
@@ -307,15 +307,14 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
* vPMD tx will not set sw_ring's mbuf to NULL after free,
* so need to free remains more carefully.
*/
- const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;
-
- if (txq->vector_sw_ring) {
- struct ci_tx_entry_vec *swr = txq->sw_ring_vec;
- IETH_FREE_BUFS_LOOP(txq, swr, start);
- } else {
- struct ci_tx_entry *swr = txq->sw_ring;
- IETH_FREE_BUFS_LOOP(txq, swr, start);
- }
+ const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
+ const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
+ const uint16_t end = txq->tx_tail >> use_ctx;
+
+ if (txq->vector_sw_ring)
+ IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
+ else
+ IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
}
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b70919c5dc..081d743e62 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1933,7 +1933,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return err;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
i40e_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -2608,7 +2608,7 @@ i40e_tx_queue_release(void *txq)
return;
}
- ci_txq_release_all_mbufs(q);
+ ci_txq_release_all_mbufs(q, false);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -3071,7 +3071,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
if (!dev->data->tx_queues[i])
continue;
- ci_txq_release_all_mbufs(dev->data->tx_queues[i]);
+ ci_txq_release_all_mbufs(dev->data->tx_queues[i], false);
i40e_reset_tx_queue(dev->data->tx_queues[i]);
}
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 7e381b2a17..f0ab881ac5 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -387,24 +387,6 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq)
rxq->rx_nb_avail = 0;
}
-static inline void
-release_txq_mbufs(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
- return;
- }
-
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static const
struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
[IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_rxq_mbufs,
@@ -413,18 +395,6 @@ struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
#endif
};
-static const
-struct iavf_txq_ops iavf_txq_release_mbufs_ops[] = {
- [IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_txq_mbufs,
-#ifdef RTE_ARCH_X86
- [IAVF_REL_MBUFS_SSE_VEC].release_mbufs = iavf_tx_queue_release_mbufs_sse,
-#ifdef CC_AVX512_SUPPORT
- [IAVF_REL_MBUFS_AVX512_VEC].release_mbufs = iavf_tx_queue_release_mbufs_avx512,
-#endif
-#endif
-
-};
-
static inline void
iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
struct rte_mbuf *mb,
@@ -889,7 +859,6 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->q_set = true;
dev->data->tx_queues[queue_idx] = txq;
txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(queue_idx);
- txq->rel_mbufs_type = IAVF_REL_MBUFS_DEFAULT;
if (check_tx_vec_allow(txq) == false) {
struct iavf_adapter *ad =
@@ -1068,7 +1037,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, txq->use_ctx);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1097,7 +1066,7 @@ iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!q)
return;
- iavf_txq_release_mbufs_ops[q->rel_mbufs_type].release_mbufs(q);
+ ci_txq_release_all_mbufs(q, q->use_ctx);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -1114,7 +1083,7 @@ iavf_reset_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, txq->use_ctx);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 8543490c70..007759e451 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2357,31 +2357,11 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
}
-void __rte_cold
-iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq)
-{
- unsigned int i;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
- const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */
- const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- if (!txq->sw_ring || txq->nb_tx_free == max_desc)
- return;
-
- i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx;
- while (i != end_desc) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- if (++i == wrap_point)
- i = 0;
- }
-}
-
int __rte_cold
iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
{
- txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC;
+ txq->vector_tx = true;
+ txq->vector_sw_ring = true;
return 0;
}
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 7130229f23..6f94587eee 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -60,24 +60,6 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline void
-_iavf_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- unsigned i;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-
- if (!txq->sw_ring || txq->nb_tx_free == max_desc)
- return;
-
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- while (i != txq->tx_tail) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- if (++i == txq->nb_tx_desc)
- i = 0;
- }
-}
-
static inline int
iavf_rxq_vec_setup_default(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 5c0b2fff46..3adf2a59e4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1458,16 +1458,11 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
_iavf_rx_queue_release_mbufs_vec(rxq);
}
-void __rte_cold
-iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq)
-{
- _iavf_tx_queue_release_mbufs_vec(txq);
-}
-
int __rte_cold
iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
- txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
+ txq->vector_tx = true;
+ txq->vector_sw_ring = false;
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index c20399cd84..57fe44ebb3 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -501,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -651,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0a890e587c..ad0ddf6a88 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1089,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
ice_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1152,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->qtx_tail = NULL;
return 0;
@@ -1531,7 +1531,7 @@ ice_tx_queue_release(void *txq)
return;
}
- ci_txq_release_all_mbufs(q);
+ ci_txq_release_all_mbufs(q, false);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 5ab62808a0..b6a6d5224d 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2457,7 +2457,7 @@ static void __rte_cold
ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->free_swring(txq);
rte_memzone_free(txq->mz);
rte_free(txq);
@@ -3364,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->reset(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -5639,7 +5639,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
if (txq->ops != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->reset(txq);
}
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
--
2.43.0
* [PATCH v1 18/21] net/ice: use vector SW ring for all vector paths
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (16 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 17/21] net/iavf: " Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 19/21] net/i40e: " Bruce Richardson
` (2 subsequent siblings)
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Konstantin Ananyev
The AVX-512 code path used a smaller SW ring structure containing only
the mbuf pointer and no other fields. Those other fields are only used
in the scalar code path, so update all vector driver code paths to use
the smaller, faster structure.
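For reference, the two ring-entry layouts being contrasted look roughly
as follows; the field names follow the common Tx entry structures
introduced earlier in the series, but this is a sketch rather than a
verbatim copy:

#include <stdint.h>

struct rte_mbuf;	/* from rte_mbuf.h */

struct ci_tx_entry {		/* full entry, used by scalar paths */
	struct rte_mbuf *mbuf;
	uint16_t next_id;	/* next descriptor in the chain */
	uint16_t last_id;	/* last descriptor of the packet */
};

struct ci_tx_entry_vec {	/* minimal entry, used by vector paths */
	struct rte_mbuf *mbuf;	/* nothing but the mbuf pointer */
};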
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 7 +++++++
drivers/net/ice/ice_rxtx.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/ice/ice_rxtx_vec_avx512.c | 14 ++------------
drivers/net/ice/ice_rxtx_vec_common.h | 6 ------
drivers/net/ice/ice_rxtx_vec_sse.c | 12 ++++++------
6 files changed, 22 insertions(+), 31 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 279eb6ea67..d4054d7150 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -109,6 +109,13 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
txep[i].mbuf = tx_pkts[i];
}
+static __rte_always_inline void
+ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ for (uint16_t i = 0; i < nb_pkts; ++i)
+ txep[i].mbuf = tx_pkts[i];
+}
+
#define IETH_VPMD_TX_MAX_FREE_BUF 64
typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ad0ddf6a88..77cb6688a7 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* record what kind of descriptor cleanup we need on teardown */
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = ad->tx_use_avx512;
+ txq->vector_sw_ring = txq->vector_tx;
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 12ffa0fa9a..98bab322b4 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -867,7 +867,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_vec(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -875,13 +875,13 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -896,10 +896,10 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index f6ec593f96..481f784e34 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -924,16 +924,6 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
}
}
-static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool do_offload)
@@ -964,7 +954,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry_avx512(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, do_offload);
tx_pkts += (n - 1);
@@ -982,7 +972,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
}
- ice_tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, do_offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 907828b675..aa709fb51c 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -20,12 +20,6 @@ ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, ice_tx_desc_done);
-}
-
static inline void
_ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 04f6408338..5f0231c1e1 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -709,7 +709,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_vec(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
nb_commit = nb_pkts;
@@ -718,13 +718,13 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
ice_vtx1(txdp, *tx_pkts, flags);
@@ -738,10 +738,10 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v1 19/21] net/i40e: use vector SW ring for all vector paths
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (17 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 18/21] net/ice: use vector SW ring for all vector paths Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 20/21] net/iavf: " Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 21/21] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage
The AVX-512 code path used a smaller SW ring structure containing only
the mbuf pointer and no other fields. Those other fields are only used
in the scalar code path, so update all vector driver code paths (AVX2,
SSE, Neon, Altivec) to use the smaller, faster structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 8 +++++---
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 14 ++------------
drivers/net/i40e/i40e_rxtx_vec_common.h | 6 ------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_sse.c | 12 ++++++------
7 files changed, 31 insertions(+), 45 deletions(-)
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 081d743e62..745c467912 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
tx_queue_id);
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = ad->tx_use_avx512;
+ txq->vector_sw_ring = txq->vector_tx;
/*
* tx_queue_id is queue id application refers to, while
@@ -3550,9 +3550,11 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
}
}
+ if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_128)
+ ad->tx_vec_allowed = false;
+
if (ad->tx_simple_allowed) {
- if (ad->tx_vec_allowed &&
- rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+ if (ad->tx_vec_allowed) {
#ifdef RTE_ARCH_X86
if (ad->tx_use_avx512) {
#ifdef CC_AVX512_SUPPORT
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 500bba2cef..b6900a3e15 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -553,14 +553,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
nb_commit = nb_pkts;
@@ -569,13 +569,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -589,10 +589,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 29bef64287..2477573c01 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -745,13 +745,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -759,13 +759,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -780,10 +780,10 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index c555c3491d..2497e6a8f0 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -807,16 +807,6 @@ vtx(volatile struct i40e_tx_desc *txdp,
}
}
-static __rte_always_inline void
-tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline uint16_t
i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
@@ -844,7 +834,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry_avx512(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -862,7 +852,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
}
- tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 907d32dd0b..733dc797cd 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -24,12 +24,6 @@ i40e_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-i40e_tx_free_bufs(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, i40e_tx_desc_done);
-}
-
static inline void
_i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 4006538ba5..37d7e51e60 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -681,14 +681,14 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -696,13 +696,13 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -716,10 +716,10 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index e9a5715515..3757272402 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -700,14 +700,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -715,13 +715,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -735,10 +735,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v1 20/21] net/iavf: use vector SW ring for all vector paths
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (18 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 19/21] net/i40e: " Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 21/21] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Vladimir Medvedkin, Ian Stokes, Konstantin Ananyev
The AVX-512 code path used a smaller SW ring structure containing only
the mbuf pointer and no other fields. Those other fields are only used
in the scalar code path, so update all vector driver code paths (AVX2,
SSE) to use the smaller, faster structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/iavf/iavf_rxtx.c | 7 -------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 8 --------
drivers/net/iavf/iavf_rxtx_vec_common.h | 6 ------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 14 +++++++-------
5 files changed, 13 insertions(+), 34 deletions(-)
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index f0ab881ac5..6692f6992b 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -4193,14 +4193,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
-#ifdef CC_AVX512_SUPPORT
- if (use_avx512)
- iavf_txq_vec_setup_avx512(txq);
- else
- iavf_txq_vec_setup(txq);
-#else
iavf_txq_vec_setup(txq);
-#endif
}
if (no_poll_on_link_down) {
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index fdb98b417a..b847886081 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1736,14 +1736,14 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -1752,13 +1752,13 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
iavf_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -1773,10 +1773,10 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 007759e451..641f3311eb 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2357,14 +2357,6 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
}
-int __rte_cold
-iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
-{
- txq->vector_tx = true;
- txq->vector_sw_ring = true;
- return 0;
-}
-
uint16_t
iavf_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 6f94587eee..c69399a173 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -24,12 +24,6 @@ iavf_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-iavf_tx_free_bufs(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, iavf_tx_desc_done);
-}
-
static inline void
_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 3adf2a59e4..9f7db80bfd 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1368,14 +1368,14 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -1384,13 +1384,13 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -1404,10 +1404,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
@@ -1462,7 +1462,7 @@ int __rte_cold
iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
txq->vector_tx = true;
- txq->vector_sw_ring = false;
+ txq->vector_sw_ring = txq->vector_tx;
return 0;
}
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v1 21/21] net/ixgbe: use common Tx backlog entry fn
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
` (19 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 20/21] net/iavf: " Bruce Richardson
@ 2024-12-02 11:24 ` Bruce Richardson
20 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 11:24 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Remove the custom vector Tx backlog entry function and use the standard
_common_intel one, now that all vector drivers are using the same,
smaller ring structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 10 ----------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 4 ++--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 4 ++--
3 files changed, 4 insertions(+), 14 deletions(-)
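For context, the shared helper being switched to is presumably the same pointer-copy loop as the per-driver tx_backlog_entry() copies deleted below, just taking the smaller vector entry type. A sketch, assuming the _common_intel definition mirrors those copies:

/* Record the mbufs being transmitted in the vector SW ring,
 * one entry per packet. */
static __rte_always_inline void
ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep,
		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	uint16_t i;

	for (i = 0; i < nb_pkts; ++i)
		txep[i].mbuf = tx_pkts[i];
}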
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 65794e45cb..22f77b1a4d 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -68,16 +68,6 @@ ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 2ccb399b64..f879f6fa9a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -597,7 +597,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -614,7 +614,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring_vec[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index fa26365f06..915358e16b 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -720,7 +720,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -737,7 +737,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring_vec[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* Re: [PATCH v1 11/21] net/_common_intel: add post-Tx buffer free function
2024-12-02 11:24 ` [PATCH v1 11/21] net/_common_intel: add post-Tx buffer free function Bruce Richardson
@ 2024-12-02 12:59 ` David Marchand
2024-12-02 13:12 ` Bruce Richardson
2024-12-02 13:24 ` Bruce Richardson
0 siblings, 2 replies; 127+ messages in thread
From: David Marchand @ 2024-12-02 12:59 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
On Mon, Dec 2, 2024 at 12:27 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> The actions taken for post-Tx buffer free for the SSE and AVX paths of
> the i40e, iavf and ice drivers are all common, so centralize those in
> the net/_common_intel code.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> drivers/net/_common_intel/tx.h | 71 ++++++++++++++++++++++++
> drivers/net/i40e/i40e_rxtx_vec_common.h | 72 ++++---------------------
> drivers/net/iavf/iavf_rxtx_vec_common.h | 61 ++++-----------------
> drivers/net/ice/ice_rxtx_vec_common.h | 61 ++++-----------------
> 4 files changed, 98 insertions(+), 167 deletions(-)
>
> diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
> index c372d2838b..a930309c05 100644
> --- a/drivers/net/_common_intel/tx.h
> +++ b/drivers/net/_common_intel/tx.h
> @@ -7,6 +7,7 @@
>
> #include <stdint.h>
> #include <rte_mbuf.h>
> +#include <rte_ethdev.h>
>
> /* forward declaration of the common intel (ci) queue structure */
> struct ci_tx_queue;
> @@ -107,4 +108,74 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
> txep[i].mbuf = tx_pkts[i];
> }
>
> +#define IETH_VPMD_TX_MAX_FREE_BUF 64
> +
> +typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
> +
> +static __rte_always_inline int
> +ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
> +{
> + struct ci_tx_entry *txep;
> + uint32_t n;
> + uint32_t i;
> + int nb_free = 0;
> + struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
> +
> + /* check DD bits on threshold descriptor */
> + if (!desc_done(txq, txq->tx_next_dd))
> + return 0;
> +
> + n = txq->tx_rs_thresh;
> +
> + /* first buffer to free from S/W ring is at index
> + * tx_next_dd - (tx_rs_thresh-1)
> + */
> + txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
> +
> + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
> + for (i = 0; i < n; i++) {
> + free[i] = txep[i].mbuf;
> + /* no need to reset txep[i].mbuf in vector path */
> + }
> + rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
> + goto done;
> + }
> +
> + m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
> + if (likely(m != NULL)) {
> + free[0] = m;
> + nb_free = 1;
> + for (i = 1; i < n; i++) {
> + m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
> + if (likely(m != NULL)) {
> + if (likely(m->pool == free[0]->pool)) {
> + free[nb_free++] = m;
> + } else {
> + rte_mempool_put_bulk(free[0]->pool,
> + (void *)free,
> + nb_free);
> + free[0] = m;
> + nb_free = 1;
> + }
> + }
> + }
> + rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
> + } else {
> + for (i = 1; i < n; i++) {
> + m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
> + if (m != NULL)
> + rte_mempool_put(m->pool, m);
> + }
> + }
Is it possible to take an extra step and convert to rte_pktmbuf_free_bulk?
> +
> +done:
> + /* buffers were freed, update counters */
> + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
> + txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
> + if (txq->tx_next_dd >= txq->nb_tx_desc)
> + txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
> +
> + return txq->tx_rs_thresh;
> +}
> +
--
David Marchand
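For reference, rte_pktmbuf_free_bulk() is an existing DPDK mbuf API; its signature, for readers following the discussion:

/* Free a bulk of packet mbufs back into their original mempools;
 * each mbuf's segment chain (m->next) is freed along with it. */
void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int count);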
^ permalink raw reply [flat|nested] 127+ messages in thread
* Re: [PATCH v1 11/21] net/_common_intel: add post-Tx buffer free function
2024-12-02 12:59 ` David Marchand
@ 2024-12-02 13:12 ` Bruce Richardson
2024-12-02 13:24 ` Bruce Richardson
1 sibling, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 13:12 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
On Mon, Dec 02, 2024 at 01:59:37PM +0100, David Marchand wrote:
> On Mon, Dec 2, 2024 at 12:27 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > The actions taken for post-Tx buffer free for the SSE and AVX paths of
> > the i40e, iavf and ice drivers are all common, so centralize those in
> > the net/_common_intel code.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> > drivers/net/_common_intel/tx.h | 71 ++++++++++++++++++++++++
> > drivers/net/i40e/i40e_rxtx_vec_common.h | 72 ++++---------------------
> > drivers/net/iavf/iavf_rxtx_vec_common.h | 61 ++++-----------------
> > drivers/net/ice/ice_rxtx_vec_common.h | 61 ++++-----------------
> > 4 files changed, 98 insertions(+), 167 deletions(-)
> >
> > diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
> > index c372d2838b..a930309c05 100644
> > --- a/drivers/net/_common_intel/tx.h
> > +++ b/drivers/net/_common_intel/tx.h
> > @@ -7,6 +7,7 @@
> >
> > #include <stdint.h>
> > #include <rte_mbuf.h>
> > +#include <rte_ethdev.h>
> >
> > /* forward declaration of the common intel (ci) queue structure */
> > struct ci_tx_queue;
> > @@ -107,4 +108,74 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
> > txep[i].mbuf = tx_pkts[i];
> > }
> >
> > +#define IETH_VPMD_TX_MAX_FREE_BUF 64
> > +
> > +typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
> > +
> > +static __rte_always_inline int
> > +ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
> > +{
> > + struct ci_tx_entry *txep;
> > + uint32_t n;
> > + uint32_t i;
> > + int nb_free = 0;
> > + struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
> > +
> > + /* check DD bits on threshold descriptor */
> > + if (!desc_done(txq, txq->tx_next_dd))
> > + return 0;
> > +
> > + n = txq->tx_rs_thresh;
> > +
> > + /* first buffer to free from S/W ring is at index
> > + * tx_next_dd - (tx_rs_thresh-1)
> > + */
> > + txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
> > +
> > + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
> > + for (i = 0; i < n; i++) {
> > + free[i] = txep[i].mbuf;
> > + /* no need to reset txep[i].mbuf in vector path */
> > + }
> > + rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
> > + goto done;
> > + }
> > +
> > + m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
> > + if (likely(m != NULL)) {
> > + free[0] = m;
> > + nb_free = 1;
> > + for (i = 1; i < n; i++) {
> > + m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
> > + if (likely(m != NULL)) {
> > + if (likely(m->pool == free[0]->pool)) {
> > + free[nb_free++] = m;
> > + } else {
> > + rte_mempool_put_bulk(free[0]->pool,
> > + (void *)free,
> > + nb_free);
> > + free[0] = m;
> > + nb_free = 1;
> > + }
> > + }
> > + }
> > + rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
> > + } else {
> > + for (i = 1; i < n; i++) {
> > + m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
> > + if (m != NULL)
> > + rte_mempool_put(m->pool, m);
> > + }
> > + }
>
> Is it possible to take an extra step and convert to rte_pktmbuf_free_bulk?
>
Will investigate....
^ permalink raw reply [flat|nested] 127+ messages in thread
* Re: [PATCH v1 11/21] net/_common_intel: add post-Tx buffer free function
2024-12-02 12:59 ` David Marchand
2024-12-02 13:12 ` Bruce Richardson
@ 2024-12-02 13:24 ` Bruce Richardson
2024-12-02 13:55 ` David Marchand
1 sibling, 1 reply; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 13:24 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
On Mon, Dec 02, 2024 at 01:59:37PM +0100, David Marchand wrote:
> On Mon, Dec 2, 2024 at 12:27 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > The actions taken for post-Tx buffer free for the SSE and AVX paths of
> > the i40e, iavf and ice drivers are all common, so centralize those in
> > the net/_common_intel code.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> > drivers/net/_common_intel/tx.h | 71 ++++++++++++++++++++++++
> > drivers/net/i40e/i40e_rxtx_vec_common.h | 72 ++++---------------------
> > drivers/net/iavf/iavf_rxtx_vec_common.h | 61 ++++-----------------
> > drivers/net/ice/ice_rxtx_vec_common.h | 61 ++++-----------------
> > 4 files changed, 98 insertions(+), 167 deletions(-)
> >
> > diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
> > index c372d2838b..a930309c05 100644
> > --- a/drivers/net/_common_intel/tx.h
> > +++ b/drivers/net/_common_intel/tx.h
> > @@ -7,6 +7,7 @@
> >
> > #include <stdint.h>
> > #include <rte_mbuf.h>
> > +#include <rte_ethdev.h>
> >
> > /* forward declaration of the common intel (ci) queue structure */
> > struct ci_tx_queue;
> > @@ -107,4 +108,74 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
> > txep[i].mbuf = tx_pkts[i];
> > }
> >
> > +#define IETH_VPMD_TX_MAX_FREE_BUF 64
> > +
> > +typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
> > +
> > +static __rte_always_inline int
> > +ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
> > +{
> > + struct ci_tx_entry *txep;
> > + uint32_t n;
> > + uint32_t i;
> > + int nb_free = 0;
> > + struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
> > +
> > + /* check DD bits on threshold descriptor */
> > + if (!desc_done(txq, txq->tx_next_dd))
> > + return 0;
> > +
> > + n = txq->tx_rs_thresh;
> > +
> > + /* first buffer to free from S/W ring is at index
> > + * tx_next_dd - (tx_rs_thresh-1)
> > + */
> > + txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
> > +
> > + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
> > + for (i = 0; i < n; i++) {
> > + free[i] = txep[i].mbuf;
> > + /* no need to reset txep[i].mbuf in vector path */
> > + }
> > + rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
> > + goto done;
> > + }
> > +
> > + m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
> > + if (likely(m != NULL)) {
> > + free[0] = m;
> > + nb_free = 1;
> > + for (i = 1; i < n; i++) {
> > + m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
> > + if (likely(m != NULL)) {
> > + if (likely(m->pool == free[0]->pool)) {
> > + free[nb_free++] = m;
> > + } else {
> > + rte_mempool_put_bulk(free[0]->pool,
> > + (void *)free,
> > + nb_free);
> > + free[0] = m;
> > + nb_free = 1;
> > + }
> > + }
> > + }
> > + rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
> > + } else {
> > + for (i = 1; i < n; i++) {
> > + m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
> > + if (m != NULL)
> > + rte_mempool_put(m->pool, m);
> > + }
> > + }
>
> Is it possible to take an extra step and convert to rte_pktmbuf_free_bulk?
>
Right now that's not possible without some more severe refactoring - and
even then I'm not convinced that it should be done. The code here is
working off the buffers in the shadow ring directly, where they should be
flattened out to avoid having mbuf chains. Therefore, we are freeing
segment by segment as each buffer has been transmitted.
/Bruce
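To illustrate the distinction described above, a minimal sketch (not code from the series; the helper name free_ring_segments is illustrative): the SW ring holds one entry per descriptor, i.e. per segment, so each entry is released with the segment-level API, whereas rte_pktmbuf_free_bulk() would walk each mbuf's m->next chain.

/* Per-segment release, as in ci_tx_free_bufs(): prefree_seg returns
 * the mbuf only once its last reference is dropped, and then only
 * this single segment is returned to its mempool. */
static inline void
free_ring_segments(struct ci_tx_entry *txep, uint32_t n)
{
	uint32_t i;

	for (i = 0; i < n; i++) {
		struct rte_mbuf *m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
		if (m != NULL)
			rte_mempool_put(m->pool, m);
	}
}
/* rte_pktmbuf_free_bulk(), by contrast, frees each mbuf together with
 * its m->next chain - applied to a flattened per-segment ring it
 * would free segments twice. */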
^ permalink raw reply [flat|nested] 127+ messages in thread
* Re: [PATCH v1 09/21] net/ixgbe: use common Tx queue structure
2024-12-02 11:24 ` [PATCH v1 09/21] net/ixgbe: use common Tx queue structure Bruce Richardson
@ 2024-12-02 13:51 ` Medvedkin, Vladimir
2024-12-02 14:09 ` Bruce Richardson
0 siblings, 1 reply; 127+ messages in thread
From: Medvedkin, Vladimir @ 2024-12-02 13:51 UTC (permalink / raw)
To: Bruce Richardson, dev
Cc: Anatoly Burakov, Wathsala Vithanage, Konstantin Ananyev
Hi Bruce,
On 02/12/2024 11:24, Bruce Richardson wrote:
> Merge in additional fields used by the ixgbe driver and then convert it
> over to using the common Tx queue structure.
>
> Signed-off-by: Bruce Richardson<bruce.richardson@intel.com>
> ---
> drivers/net/_common_intel/tx.h | 14 +++-
> drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
> .../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
> drivers/net/ixgbe/ixgbe_rxtx.c | 64 +++++++++----------
> drivers/net/ixgbe/ixgbe_rxtx.h | 56 ++--------------
> drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 26 ++++----
> drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 14 ++--
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 14 ++--
> 8 files changed, 80 insertions(+), 114 deletions(-)
>
> diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
> index c4a1a0c816..51ae3b051d 100644
> --- a/drivers/net/_common_intel/tx.h
> +++ b/drivers/net/_common_intel/tx.h
> @@ -34,9 +34,13 @@ struct ci_tx_queue {
> volatile struct i40e_tx_desc *i40e_tx_ring;
> volatile struct iavf_tx_desc *iavf_tx_ring;
> volatile struct ice_tx_desc *ice_tx_ring;
> + volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
> };
> volatile uint8_t *qtx_tail; /* register address of tail */
> - struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
> + union {
> + struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
> + struct ci_tx_entry_vec *sw_ring_vec;
> + };
> rte_iova_t tx_ring_dma; /* TX ring DMA address */
> uint16_t nb_tx_desc; /* number of TX descriptors */
> uint16_t tx_tail; /* current value of tail register */
> @@ -87,6 +91,14 @@ struct ci_tx_queue {
> uint8_t tc;
> bool use_ctx; /* with ctx info, each pkt needs two descriptors */
> };
> + struct { /* ixgbe specific values */
> + const struct ixgbe_txq_ops *ops;
> + struct ixgbe_advctx_info *ctx_cache;
'struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];' takes only 80 bytes
of memory, so using a pointer saves 72 bytes. Since the final version of
the 'struct ci_tx_queue' without driver-specific fields takes 96 bytes,
embedding the 'ixgbe_advctx_info ctx_cache[2]' array will take one more
cache line, which is not a huge deal in my opinion.
Or consider another (possibly better) approach, where for non-IXGBE
drivers 'struct ci_tx_queue' remains the same size, and only for IXGBE
an extra 80 bytes is allocated:
struct { /* ixgbe specific values */
	const struct ixgbe_txq_ops *ops;
	uint32_t ctx_curr;
	uint8_t pthresh;   /**< Prefetch threshold register. */
	uint8_t hthresh;   /**< Host threshold register. */
	uint8_t wthresh;   /**< Write-back threshold reg. */
	uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
	struct ixgbe_advctx_info ctx_cache[0];
};
> + uint32_t ctx_curr;
> +#ifdef RTE_LIB_SECURITY
> + uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
> +#endif
> + };
> };
> };
>
<snip>
--
Regards,
Vladimir
^ permalink raw reply [flat|nested] 127+ messages in thread
* Re: [PATCH v1 11/21] net/_common_intel: add post-Tx buffer free function
2024-12-02 13:24 ` Bruce Richardson
@ 2024-12-02 13:55 ` David Marchand
0 siblings, 0 replies; 127+ messages in thread
From: David Marchand @ 2024-12-02 13:55 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
On Mon, Dec 2, 2024 at 2:24 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Mon, Dec 02, 2024 at 01:59:37PM +0100, David Marchand wrote:
> > On Mon, Dec 2, 2024 at 12:27 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > The actions taken for post-Tx buffer free for the SSE and AVX paths of
> > > the i40e, iavf and ice drivers are all common, so centralize those in
> > > the net/_common_intel code.
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > ---
> > > drivers/net/_common_intel/tx.h | 71 ++++++++++++++++++++++++
> > > drivers/net/i40e/i40e_rxtx_vec_common.h | 72 ++++---------------------
> > > drivers/net/iavf/iavf_rxtx_vec_common.h | 61 ++++-----------------
> > > drivers/net/ice/ice_rxtx_vec_common.h | 61 ++++-----------------
> > > 4 files changed, 98 insertions(+), 167 deletions(-)
> > >
> > > diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
> > > index c372d2838b..a930309c05 100644
> > > --- a/drivers/net/_common_intel/tx.h
> > > +++ b/drivers/net/_common_intel/tx.h
> > > @@ -7,6 +7,7 @@
> > >
> > > #include <stdint.h>
> > > #include <rte_mbuf.h>
> > > +#include <rte_ethdev.h>
> > >
> > > /* forward declaration of the common intel (ci) queue structure */
> > > struct ci_tx_queue;
> > > @@ -107,4 +108,74 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
> > > txep[i].mbuf = tx_pkts[i];
> > > }
> > >
> > > +#define IETH_VPMD_TX_MAX_FREE_BUF 64
> > > +
> > > +typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
> > > +
> > > +static __rte_always_inline int
> > > +ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
> > > +{
> > > + struct ci_tx_entry *txep;
> > > + uint32_t n;
> > > + uint32_t i;
> > > + int nb_free = 0;
> > > + struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
> > > +
> > > + /* check DD bits on threshold descriptor */
> > > + if (!desc_done(txq, txq->tx_next_dd))
> > > + return 0;
> > > +
> > > + n = txq->tx_rs_thresh;
> > > +
> > > + /* first buffer to free from S/W ring is at index
> > > + * tx_next_dd - (tx_rs_thresh-1)
> > > + */
> > > + txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
> > > +
> > > + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
> > > + for (i = 0; i < n; i++) {
> > > + free[i] = txep[i].mbuf;
> > > + /* no need to reset txep[i].mbuf in vector path */
> > > + }
> > > + rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
> > > + goto done;
> > > + }
> > > +
> > > + m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
> > > + if (likely(m != NULL)) {
> > > + free[0] = m;
> > > + nb_free = 1;
> > > + for (i = 1; i < n; i++) {
> > > + m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
> > > + if (likely(m != NULL)) {
> > > + if (likely(m->pool == free[0]->pool)) {
> > > + free[nb_free++] = m;
> > > + } else {
> > > + rte_mempool_put_bulk(free[0]->pool,
> > > + (void *)free,
> > > + nb_free);
> > > + free[0] = m;
> > > + nb_free = 1;
> > > + }
> > > + }
> > > + }
> > > + rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
> > > + } else {
> > > + for (i = 1; i < n; i++) {
> > > + m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
> > > + if (m != NULL)
> > > + rte_mempool_put(m->pool, m);
> > > + }
> > > + }
> >
> > Is it possible to take an extra step and convert to rte_pktmbuf_free_bulk?
> >
> Right now that's not possible without some more severe refactoring - and
> even then I'm not convinced that it should be done. The code here is
> working off the buffers in the shadow ring directly, where they should be
> flattened out to avoid having mbuf chains. Therefore, we are freeing
> segment by segment as each buffer has been transmitted.
Never mind - at least this series removes many copies of this loop.
Thanks Bruce.
--
David Marchand
^ permalink raw reply [flat|nested] 127+ messages in thread
* Re: [PATCH v1 09/21] net/ixgbe: use common Tx queue structure
2024-12-02 13:51 ` Medvedkin, Vladimir
@ 2024-12-02 14:09 ` Bruce Richardson
2024-12-02 15:15 ` Bruce Richardson
0 siblings, 1 reply; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 14:09 UTC (permalink / raw)
To: Medvedkin, Vladimir
Cc: dev, Anatoly Burakov, Wathsala Vithanage, Konstantin Ananyev
On Mon, Dec 02, 2024 at 01:51:35PM +0000, Medvedkin, Vladimir wrote:
> Hi Bruce,
>
> On 02/12/2024 11:24, Bruce Richardson wrote:
>
> Merge in additional fields used by the ixgbe driver and then convert it
> over to using the common Tx queue structure.
>
> Signed-off-by: Bruce Richardson [1]<bruce.richardson@intel.com>
> ---
> drivers/net/_common_intel/tx.h | 14 +++-
> drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
> .../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
> drivers/net/ixgbe/ixgbe_rxtx.c | 64 +++++++++----------
> drivers/net/ixgbe/ixgbe_rxtx.h | 56 ++--------------
> drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 26 ++++----
> drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 14 ++--
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 14 ++--
> 8 files changed, 80 insertions(+), 114 deletions(-)
>
> diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
> index c4a1a0c816..51ae3b051d 100644
> --- a/drivers/net/_common_intel/tx.h
> +++ b/drivers/net/_common_intel/tx.h
> @@ -34,9 +34,13 @@ struct ci_tx_queue {
> volatile struct i40e_tx_desc *i40e_tx_ring;
> volatile struct iavf_tx_desc *iavf_tx_ring;
> volatile struct ice_tx_desc *ice_tx_ring;
> + volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
> };
> volatile uint8_t *qtx_tail; /* register address of tail */
> - struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
> + union {
> + struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
> + struct ci_tx_entry_vec *sw_ring_vec;
> + };
> rte_iova_t tx_ring_dma; /* TX ring DMA address */
> uint16_t nb_tx_desc; /* number of TX descriptors */
> uint16_t tx_tail; /* current value of tail register */
> @@ -87,6 +91,14 @@ struct ci_tx_queue {
> uint8_t tc;
> bool use_ctx; /* with ctx info, each pkt needs two descriptors */
> };
> + struct { /* ixgbe specific values */
> + const struct ixgbe_txq_ops *ops;
> + struct ixgbe_advctx_info *ctx_cache;
>
> 'struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];' takes only 80
> bytes of memory, so using a pointer saves 72 bytes. Since the final
> version of the 'struct ci_tx_queue' without driver specific fields
> takes 96 bytes, embedding 'ixgbe_advctx_info ctx_cache[2]' array will
> take one more cache line, which is not a huge deal in my opinion.
>
Maybe not, though another way to look at it is that those two context
entries are nearly as big as the rest of the struct!
> Or consider another (possibly better) approach, where for non IXGBE
> 'struct ci_tx_queue' will remain the same size, but only for IXGBE an
> extra 80 bytes will be alllocated:
>
> struct { /* ixgbe specific values */
>
> const struct ixgbe_txq_ops *ops;
>
> uint32_t ctx_curr;
>
> uint8_t pthresh; /**< Prefetch threshold
> register. */
>
> uint8_t hthresh; /**< Host threshold
> register. */
>
> uint8_t wthresh; /**< Write-back threshold
> reg. */
>
> uint8_t using_ipsec; /**< indicates that IPsec
> TX feature is in use */
> struct ixgbe_advctx_info ctx_cache[0];
>
> };
>
> + uint32_t ctx_curr;
> +#ifdef RTE_LIB_SECURITY
> + uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
> +#endif
> + };
> };
> };
>
I prefer solutions where the extra 80 bytes are only allocated for the one
driver that needs them. I'll see if this alternative can work ok for us.
/Bruce
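Concretely, the trailing-array layout implies one allocation sized for the queue plus the contexts, along these lines (a sketch; rte_zmalloc, RTE_CACHE_LINE_SIZE and IXGBE_CTX_NUM are existing DPDK/ixgbe names, the rest is illustrative):

/* Single allocation covering the queue struct and, for ixgbe only,
 * the trailing context-cache array. */
struct ci_tx_queue *txq = rte_zmalloc("ixgbe tx queue",
		sizeof(*txq) + IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info),
		RTE_CACHE_LINE_SIZE);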
^ permalink raw reply [flat|nested] 127+ messages in thread
* Re: [PATCH v1 09/21] net/ixgbe: use common Tx queue structure
2024-12-02 14:09 ` Bruce Richardson
@ 2024-12-02 15:15 ` Bruce Richardson
0 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-02 15:15 UTC (permalink / raw)
To: Medvedkin, Vladimir
Cc: dev, Anatoly Burakov, Wathsala Vithanage, Konstantin Ananyev,
david.marchand
On Mon, Dec 02, 2024 at 02:09:35PM +0000, Bruce Richardson wrote:
> On Mon, Dec 02, 2024 at 01:51:35PM +0000, Medvedkin, Vladimir wrote:
> > Hi Bruce,
> >
> > On 02/12/2024 11:24, Bruce Richardson wrote:
> >
> > Merge in additional fields used by the ixgbe driver and then convert it
> > over to using the common Tx queue structure.
> >
> > Signed-off-by: Bruce Richardson [1]<bruce.richardson@intel.com>
> > ---
> > drivers/net/_common_intel/tx.h | 14 +++-
> > drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
> > .../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
> > drivers/net/ixgbe/ixgbe_rxtx.c | 64 +++++++++----------
> > drivers/net/ixgbe/ixgbe_rxtx.h | 56 ++--------------
> > drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 26 ++++----
> > drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 14 ++--
> > drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 14 ++--
> > 8 files changed, 80 insertions(+), 114 deletions(-)
> >
> > diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
> > index c4a1a0c816..51ae3b051d 100644
> > --- a/drivers/net/_common_intel/tx.h
> > +++ b/drivers/net/_common_intel/tx.h
> > @@ -34,9 +34,13 @@ struct ci_tx_queue {
> > volatile struct i40e_tx_desc *i40e_tx_ring;
> > volatile struct iavf_tx_desc *iavf_tx_ring;
> > volatile struct ice_tx_desc *ice_tx_ring;
> > + volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
> > };
> > volatile uint8_t *qtx_tail; /* register address of tail */
> > - struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
> > + union {
> > + struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
> > + struct ci_tx_entry_vec *sw_ring_vec;
> > + };
> > rte_iova_t tx_ring_dma; /* TX ring DMA address */
> > uint16_t nb_tx_desc; /* number of TX descriptors */
> > uint16_t tx_tail; /* current value of tail register */
> > @@ -87,6 +91,14 @@ struct ci_tx_queue {
> > uint8_t tc;
> > bool use_ctx; /* with ctx info, each pkt needs two descriptors */
> > };
> > + struct { /* ixgbe specific values */
> > + const struct ixgbe_txq_ops *ops;
> > + struct ixgbe_advctx_info *ctx_cache;
> >
> > 'struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];' takes only 80
> > bytes of memory, so using a pointer saves 72 bytes. Since the final
> > version of the 'struct ci_tx_queue' without driver specific fields
> > takes 96 bytes, embedding 'ixgbe_advctx_info ctx_cache[2]' array will
> > take one more cache line, which is not a huge deal in my opinion.
> >
>
> Maybe not, though another way to look at it is that those two
> context entries are nearly as big as the rest of the struct!
>
> > Or consider another (possibly better) approach, where for non IXGBE
> > 'struct ci_tx_queue' will remain the same size, but only for IXGBE an
> > extra 80 bytes will be alllocated:
> >
> > struct { /* ixgbe specific values */
> >
> > const struct ixgbe_txq_ops *ops;
> >
> > uint32_t ctx_curr;
> >
> > uint8_t pthresh; /**< Prefetch threshold
> > register. */
> >
> > uint8_t hthresh; /**< Host threshold
> > register. */
> >
> > uint8_t wthresh; /**< Write-back threshold
> > reg. */
> >
> > uint8_t using_ipsec; /**< indicates that IPsec
> > TX feature is in use */
> > struct ixgbe_advctx_info ctx_cache[0];
> >
> > };
> >
> > + uint32_t ctx_curr;
> > +#ifdef RTE_LIB_SECURITY
> > + uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
> > +#endif
> > + };
> > };
> > };
> >
>
> I prefer solutions where the extra 80 bytes are only allocated for the one
> driver that needs them. I'll see if this alternative can work ok for us.
>
Trying out this solution, I hit the problem described in the commit log of
the previous patch to this one - it introduces a dependency on ixgbe
structures inside the common driver. By changing the type of the ctx field
from an array to a pointer, we remove the need to have the actual type
defined at compile time - as long as we never dereference the pointer. This
no-reference is why, for example, we have have the union of all the
different descriptor types in the structure without having to include the
headers that define them.
If we include ixgbe_advctx_info as an array rather than a pointer - even as
a zero-length array - then we need to have the definition of the structure
present at that point in the code. This means we either need to:
- copy in the definitions of ixgbe_advctx_info and ixgbe_tx_offload into
our common header file, or
- put a #include ixgbe_rxtx.h in our common tx.h header file. [And it can't
go at the start because that header itself includes tx.h to get the
definitions of the queue types, meaning that it can only be included
half-way down tx.h]
In any case, either approach introduces ixgbe-specific definitions into the
common header files. Therefore, I would prefer to keep things as in this
patchset, giving us a smaller structure, and a clean separation of
driver-specific and common structures.
/Bruce
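The compile-time rule relied on above can be shown in isolation (a minimal sketch, not code from the series; example_txq is illustrative):

/* A pointer to an incomplete type compiles - only the pointer's size
 * must be known - but an array member, even zero-length, requires the
 * full structure definition to be visible. */
struct ixgbe_advctx_info;	/* forward declaration only */

struct example_txq {
	struct ixgbe_advctx_info *ctx_cache;	/* fine without the header */
	/* struct ixgbe_advctx_info ctx[0]; */	/* error: incomplete type */
};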
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (22 preceding siblings ...)
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 01/22] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
` (21 more replies)
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
25 siblings, 22 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson
This patchset attempts to reduce the amount of code duplication across a
number of Intel NIC drivers, specifically: ixgbe, i40e, iavf, and ice.
The first patch extracts a function from the Rx side; otherwise, the
majority of the changes are on the Tx side, leading to a converged Tx
queue structure across the 4 drivers, and a large number of common
functions.
v1->v2:
* Fixed two additional checkpatch issues that were flagged.
* Added patch 21, which performs additional cleanup that is possible
once all vector drivers use the same mbuf free/release process.
[This brings the patchset to having over twice as many lines removed
as added (1887 vs 930), and close to having a net removal of 1kloc]
RFC->v1:
* Moved the location of the common code from "common/intel_eth" to
"net/_common_intel", and added only ".." to the driver include path so
that the include paths contain "_common_intel", making it clear these are
not driver-local headers.
* Due to change in location, structure/fn prefix changes from "ieth" to
"ci" for "common intel".
* Removed the seemingly arbitrary split of vector and non-vector code -
since much of the code taken from vector files was scalar code which
was used by the vector drivers.
* Split code into separate Rx and Tx files.
* Fixed multiple checkpatch issues (but not all).
* Attempted to improve name standardization by using "_vec" as a common
suffix for all vector-related fns and data. Previously, some names had
"vec" in the middle, while others had just a "_v" suffix or the full
word "vector" as a suffix.
* Other minor changes...
Bruce Richardson (22):
net/_common_intel: add pkt reassembly fn for intel drivers
net/_common_intel: provide common Tx entry structures
net/_common_intel: add Tx mbuf ring replenish fn
drivers/net: align Tx queue struct field names
drivers/net: add prefix for driver-specific structs
net/_common_intel: merge ice and i40e Tx queue struct
net/iavf: use common Tx queue structure
net/ixgbe: convert Tx queue context cache field to ptr
net/ixgbe: use common Tx queue structure
net/_common_intel: pack Tx queue structure
net/_common_intel: add post-Tx buffer free function
net/_common_intel: add Tx buffer free fn for AVX-512
net/iavf: use common Tx free fn for AVX-512
net/ice: move Tx queue mbuf cleanup fn to common
net/i40e: use common Tx queue mbuf cleanup fn
net/ixgbe: use common Tx queue mbuf cleanup fn
net/iavf: use common Tx queue mbuf cleanup fn
net/ice: use vector SW ring for all vector paths
net/i40e: use vector SW ring for all vector paths
net/iavf: use vector SW ring for all vector paths
net/_common_intel: remove unneeded code
net/ixgbe: use common Tx backlog entry fn
drivers/net/_common_intel/rx.h | 79 ++++++
drivers/net/_common_intel/tx.h | 249 ++++++++++++++++++
drivers/net/i40e/i40e_ethdev.c | 4 +-
drivers/net/i40e/i40e_ethdev.h | 8 +-
drivers/net/i40e/i40e_fdir.c | 10 +-
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 6 +-
drivers/net/i40e/i40e_rxtx.c | 192 +++++---------
drivers/net/i40e/i40e_rxtx.h | 61 +----
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 144 +---------
drivers/net/i40e/i40e_rxtx_vec_common.h | 144 +---------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 26 +-
drivers/net/i40e/meson.build | 2 +-
drivers/net/iavf/iavf.h | 2 +-
drivers/net/iavf/iavf_ethdev.c | 4 +-
drivers/net/iavf/iavf_rxtx.c | 180 +++++--------
drivers/net/iavf/iavf_rxtx.h | 61 +----
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 47 ++--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 214 +++------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 160 +----------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 56 ++--
drivers/net/iavf/iavf_vchnl.c | 8 +-
drivers/net/iavf/meson.build | 2 +-
drivers/net/ice/ice_dcf.c | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_diagnose.c | 2 +-
drivers/net/ice/ice_ethdev.c | 2 +-
drivers/net/ice/ice_ethdev.h | 7 +-
drivers/net/ice/ice_rxtx.c | 163 +++++-------
drivers/net/ice/ice_rxtx.h | 52 +---
drivers/net/ice/ice_rxtx_vec_avx2.c | 26 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 153 +----------
drivers/net/ice/ice_rxtx_vec_common.h | 190 +------------
drivers/net/ice/ice_rxtx_vec_sse.c | 32 +--
drivers/net/ice/meson.build | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 2 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 139 +++++-----
drivers/net/ixgbe/ixgbe_rxtx.h | 73 +----
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 128 ++-------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 37 ++-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 37 ++-
drivers/net/ixgbe/meson.build | 2 +-
46 files changed, 930 insertions(+), 1889 deletions(-)
create mode 100644 drivers/net/_common_intel/rx.h
create mode 100644 drivers/net/_common_intel/tx.h
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 01/22] net/_common_intel: add pkt reassembly fn for intel drivers
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 02/22] net/_common_intel: provide common Tx entry structures Bruce Richardson
` (20 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, David Christensen, Ian Stokes,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
The code for reassembling a single, multi-mbuf packet from multiple
buffers received from the NIC is duplicated across many drivers. Rather
than having multiple copies of this function, we can create an
"_common_intel" directory to hold such functions and consolidate
multiple functions down to a single one for easier maintenance.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/rx.h | 79 +++++++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 64 +-----------------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +-
drivers/net/i40e/meson.build | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 8 +--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 8 +--
drivers/net/iavf/iavf_rxtx_vec_common.h | 65 +------------------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 +--
drivers/net/iavf/meson.build | 2 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +--
drivers/net/ice/ice_rxtx_vec_common.h | 66 +------------------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 +-
drivers/net/ice/meson.build | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 63 +-----------------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 4 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 4 +-
drivers/net/ixgbe/meson.build | 2 +-
22 files changed, 121 insertions(+), 292 deletions(-)
create mode 100644 drivers/net/_common_intel/rx.h
diff --git a/drivers/net/_common_intel/rx.h b/drivers/net/_common_intel/rx.h
new file mode 100644
index 0000000000..5bd2fea7e3
--- /dev/null
+++ b/drivers/net/_common_intel/rx.h
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef _COMMON_INTEL_RX_H_
+#define _COMMON_INTEL_RX_H_
+
+#include <stdint.h>
+#include <unistd.h>
+#include <rte_mbuf.h>
+
+#define CI_RX_BURST 32
+
+static inline uint16_t
+ci_rx_reassemble_packets(struct rte_mbuf **rx_bufs, uint16_t nb_bufs, uint8_t *split_flags,
+ struct rte_mbuf **pkt_first_seg, struct rte_mbuf **pkt_last_seg,
+ const uint8_t crc_len)
+{
+ struct rte_mbuf *pkts[CI_RX_BURST] = {0}; /*finished pkts*/
+ struct rte_mbuf *start = *pkt_first_seg;
+ struct rte_mbuf *end = *pkt_last_seg;
+ unsigned int pkt_idx, buf_idx;
+
+ for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+ if (end) {
+ /* processing a split packet */
+ end->next = rx_bufs[buf_idx];
+ rx_bufs[buf_idx]->data_len += crc_len;
+
+ start->nb_segs++;
+ start->pkt_len += rx_bufs[buf_idx]->data_len;
+ end = end->next;
+
+ if (!split_flags[buf_idx]) {
+ /* it's the last packet of the set */
+ start->hash = end->hash;
+ start->vlan_tci = end->vlan_tci;
+ start->ol_flags = end->ol_flags;
+ /* we need to strip crc for the whole packet */
+ start->pkt_len -= crc_len;
+ if (end->data_len > crc_len) {
+ end->data_len -= crc_len;
+ } else {
+ /* free up last mbuf */
+ struct rte_mbuf *secondlast = start;
+
+ start->nb_segs--;
+ while (secondlast->next != end)
+ secondlast = secondlast->next;
+ secondlast->data_len -= (crc_len - end->data_len);
+ secondlast->next = NULL;
+ rte_pktmbuf_free_seg(end);
+ }
+ pkts[pkt_idx++] = start;
+ start = NULL;
+ end = NULL;
+ }
+ } else {
+ /* not processing a split packet */
+ if (!split_flags[buf_idx]) {
+ /* not a split packet, save and skip */
+ pkts[pkt_idx++] = rx_bufs[buf_idx];
+ continue;
+ }
+ start = rx_bufs[buf_idx];
+ end = start;
+ rx_bufs[buf_idx]->data_len += crc_len;
+ rx_bufs[buf_idx]->pkt_len += crc_len;
+ }
+ }
+
+ /* save the partial packet for next time */
+ *pkt_first_seg = start;
+ *pkt_last_seg = end;
+ memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+ return pkt_idx;
+}
+
+#endif /* _COMMON_INTEL_RX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index b6b0d38ec1..95829f65d5 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -494,8 +494,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
if (i == nb_bufs)
return nb_bufs;
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 19cf0ac718..6dd6e55d9c 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -657,8 +657,8 @@ i40e_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/*
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 3b2750221b..506f1b5878 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -725,8 +725,8 @@ i40e_recv_scattered_burst_vec_avx512(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 8b745630e4..1248cecacd 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <ethdev_driver.h>
#include <rte_malloc.h>
+#include <_common_intel/rx.h>
#include "i40e_ethdev.h"
#include "i40e_rxtx.h"
@@ -15,69 +16,6 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static inline uint16_t
-reassemble_packets(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[RTE_I40E_VPMD_RX_BURST]; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end != NULL) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len)
- end->data_len -= rxq->crc_len;
- else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index e1c5c7041b..159d971796 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -623,8 +623,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index ad560d2b6b..3a8128e014 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -641,8 +641,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/meson.build b/drivers/net/i40e/meson.build
index 5c93493124..0e0b416b8f 100644
--- a/drivers/net/i40e/meson.build
+++ b/drivers/net/i40e/meson.build
@@ -36,7 +36,7 @@ sources = files(
testpmd_sources = files('i40e_testpmd.c')
deps += ['hash']
-includes += include_directories('base')
+includes += include_directories('base', '..')
if arch_subdir == 'x86'
sources += files('i40e_rxtx_vec_sse.c')
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 49d41af953..0baf5045c8 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1508,8 +1508,8 @@ iavf_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1597,8 +1597,8 @@ iavf_recv_scattered_burst_vec_avx2_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index d6a861bf80..5a88007096 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1685,8 +1685,8 @@ iavf_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1761,8 +1761,8 @@ iavf_recv_scattered_burst_vec_avx512_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 5c5220048d..26b6f07614 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <ethdev_driver.h>
#include <rte_malloc.h>
+#include <_common_intel/rx.h>
#include "iavf.h"
#include "iavf_rxtx.h"
@@ -15,70 +16,6 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static __rte_always_inline uint16_t
-reassemble_packets(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[IAVF_VPMD_RX_MAX_BURST];
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len) {
- end->data_len -= rxq->crc_len;
- } else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = NULL;
- end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
iavf_tx_free_bufs(struct iavf_tx_queue *txq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 0db6fa8bd4..48b01462ea 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1238,8 +1238,8 @@ iavf_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1307,8 +1307,8 @@ iavf_recv_scattered_burst_vec_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index b48bb83438..9106e016ef 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -5,7 +5,7 @@ if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
subdir_done()
endif
-includes += include_directories('../../common/iavf')
+includes += include_directories('../../common/iavf', '..')
testpmd_sources = files('iavf_testpmd.c')
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index d6e88dbb29..ca247b155c 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -726,8 +726,8 @@ ice_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index add095ef06..1e603d5d8f 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -763,8 +763,8 @@ ice_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -805,8 +805,8 @@ ice_recv_scattered_burst_vec_avx512_offload(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 4b73465af5..dd7da4761f 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -5,77 +5,13 @@
#ifndef _ICE_RXTX_VEC_COMMON_H_
#define _ICE_RXTX_VEC_COMMON_H_
+#include <_common_intel/rx.h>
#include "ice_rxtx.h"
#ifndef __INTEL_COMPILER
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static inline uint16_t
-ice_rx_reassemble_packets(struct ice_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[ICE_VPMD_RX_BURST] = {0}; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len) {
- end->data_len -= rxq->crc_len;
- } else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = NULL;
- end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- start = rx_bufs[buf_idx];
- end = start;
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index c01d8ede29..01533454ba 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -640,8 +640,8 @@ ice_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index 1c9dc0cc6d..02c028db73 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -19,7 +19,7 @@ sources = files(
testpmd_sources = files('ice_testpmd.c')
deps += ['hash', 'net', 'common_iavf']
-includes += include_directories('base', '../../common/iavf')
+includes += include_directories('base', '..')
if arch_subdir == 'x86'
sources += files('ice_rxtx_vec_sse.c')
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index a4d9ec9b08..2bab17c934 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -7,71 +7,10 @@
#include <stdint.h>
#include <ethdev_driver.h>
+#include <_common_intel/rx.h>
#include "ixgbe_ethdev.h"
#include "ixgbe_rxtx.h"
-static inline uint16_t
-reassemble_packets(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[nb_bufs]; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end != NULL) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len)
- end->data_len -= rxq->crc_len;
- else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 952b032eb6..7b35093075 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -516,8 +516,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a77370cdb7..a709bf8c7f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -639,8 +639,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ixgbe/meson.build b/drivers/net/ixgbe/meson.build
index 0ae12dd5ff..a65ff51379 100644
--- a/drivers/net/ixgbe/meson.build
+++ b/drivers/net/ixgbe/meson.build
@@ -35,6 +35,6 @@ elif arch_subdir == 'arm'
sources += files('ixgbe_recycle_mbufs_vec_common.c')
endif
-includes += include_directories('base')
+includes += include_directories('base', '..')
headers = files('rte_pmd_ixgbe.h')
--
2.43.0
* [PATCH v2 02/22] net/_common_intel: provide common Tx entry structures
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 01/22] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 03/22] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
` (19 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
The Tx entry structures, both vector and scalar, are common across Intel
drivers, so provide a single definition to be used everywhere.
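As a rough illustration of how a converted driver touches the shared types
(the helper below is hypothetical, but the cast to the vector entry type
mirrors what the AVX-512 hunks in this patch do):

#include <_common_intel/tx.h>
#include <rte_mbuf.h>

/* Hypothetical example: record one outgoing mbuf in the scalar SW ring. */
static inline void
example_backlog_one(struct ci_tx_entry *txe, struct rte_mbuf *mb, uint16_t last_id)
{
	txe->mbuf = mb;         /* freed once the descriptor is done */
	txe->last_id = last_id; /* last descriptor of a multi-segment packet */
}

/* Vector paths track only the mbuf pointer, so the AVX-512 code views
 * the same SW ring allocation through the smaller type:
 *     struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
 */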
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 27 +++++++++++++++++++
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 18 ++++++-------
drivers/net/i40e/i40e_rxtx.h | 14 +++-------
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 ++---
drivers/net/i40e/i40e_rxtx_vec_common.h | 4 +--
drivers/net/i40e/i40e_rxtx_vec_neon.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 2 +-
drivers/net/iavf/iavf_rxtx.c | 12 ++++-----
drivers/net/iavf/iavf_rxtx.h | 14 +++-------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 10 +++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 4 +--
drivers/net/iavf/iavf_rxtx_vec_sse.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 2 +-
drivers/net/ice/ice_rxtx.c | 16 +++++------
drivers/net/ice/ice_rxtx.h | 13 ++-------
drivers/net/ice/ice_rxtx_vec_avx2.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 6 ++---
drivers/net/ice/ice_rxtx_vec_common.h | 6 ++---
drivers/net/ice/ice_rxtx_vec_sse.c | 2 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++++------
drivers/net/ixgbe/ixgbe_rxtx.h | 22 +++------------
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 8 +++---
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +-
29 files changed, 105 insertions(+), 117 deletions(-)
create mode 100644 drivers/net/_common_intel/tx.h
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
new file mode 100644
index 0000000000..384352b9db
--- /dev/null
+++ b/drivers/net/_common_intel/tx.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef _COMMON_INTEL_TX_H_
+#define _COMMON_INTEL_TX_H_
+
+#include <stdint.h>
+#include <rte_mbuf.h>
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue.
+ */
+struct ci_tx_entry {
+ struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+ uint16_t next_id; /* Index of next descriptor in ring. */
+ uint16_t last_id; /* Index of last scattered descriptor. */
+};
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue in vector Tx.
+ */
+struct ci_tx_entry_vec {
+ struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+};
+
+#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 14424c9921..260d238ce4 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -56,7 +56,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
struct i40e_tx_queue *txq = tx_queue;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
uint16_t nb_recycle_mbufs;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 839c8a5442..2e1f07d2a1 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -378,7 +378,7 @@ i40e_build_ctob(uint32_t td_cmd,
static inline int
i40e_xmit_cleanup(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct i40e_tx_desc *txd = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -1081,8 +1081,8 @@ uint16_t
i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct i40e_tx_queue *txq;
- struct i40e_tx_entry *sw_ring;
- struct i40e_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
volatile struct i40e_tx_desc *txd;
volatile struct i40e_tx_desc *txr;
struct rte_mbuf *tx_pkt;
@@ -1331,7 +1331,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t tx_rs_thresh = txq->tx_rs_thresh;
uint16_t i = 0, j = 0;
struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
@@ -1418,7 +1418,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
uint16_t nb_pkts)
{
volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
- struct i40e_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
int mainpart, leftover;
@@ -2555,7 +2555,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket("i40e tx sw ring",
- sizeof(struct i40e_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2723,7 +2723,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
*/
#ifdef CC_AVX512_SUPPORT
if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
- struct i40e_vec_tx_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
if (txq->tx_tail < i) {
@@ -2768,7 +2768,7 @@ static int
i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
uint32_t free_cnt)
{
- struct i40e_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -2874,7 +2874,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
void
i40e_reset_tx_queue(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint16_t i, prev, size;
if (!txq) {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 33fc9770d9..0f5d3cb0b7 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _I40E_RXTX_H_
#define _I40E_RXTX_H_
+#include <_common_intel/tx.h>
+
#define RTE_PMD_I40E_RX_MAX_BURST 32
#define RTE_PMD_I40E_TX_MAX_BURST 32
@@ -122,16 +124,6 @@ struct i40e_rx_queue {
const struct rte_memzone *mz;
};
-struct i40e_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct i40e_vec_tx_entry {
- struct rte_mbuf *mbuf;
-};
-
/*
* Structure associated with each TX queue.
*/
@@ -139,7 +131,7 @@ struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
uint64_t tx_ring_phys_addr; /**< TX ring DMA address */
volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
- struct i40e_tx_entry *sw_ring; /**< virtual address of SW ring */
+ struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
volatile uint8_t *qtx_tail; /**< register address of tail */
uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 95829f65d5..ca1038eaa6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -553,7 +553,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 6dd6e55d9c..e8441de759 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -745,7 +745,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 506f1b5878..8b8a16daa8 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -757,7 +757,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
static __rte_always_inline int
i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
{
- struct i40e_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -920,7 +920,7 @@ vtx(volatile struct i40e_tx_desc *txdp,
}
static __rte_always_inline void
-tx_backlog_entry_avx512(struct i40e_vec_tx_entry *txep,
+tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -935,7 +935,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 1248cecacd..619fb89110 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -19,7 +19,7 @@
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -85,7 +85,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct i40e_tx_entry *txep,
+tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 159d971796..9b90a32e28 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -681,7 +681,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 3a8128e014..e1fa2ed543 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -700,7 +700,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6a093c6746..e337f20073 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -284,7 +284,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq)
static inline void
reset_tx_queue(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint32_t i, size;
uint16_t prev;
@@ -860,7 +860,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket("iavf tx sw ring",
- sizeof(struct iavf_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2379,7 +2379,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue,
static inline int
iavf_xmit_cleanup(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -2797,8 +2797,8 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct iavf_tx_queue *txq = tx_queue;
volatile struct iavf_tx_desc *txr = txq->tx_ring;
- struct iavf_tx_entry *txe_ring = txq->sw_ring;
- struct iavf_tx_entry *txe, *txn;
+ struct ci_tx_entry *txe_ring = txq->sw_ring;
+ struct ci_tx_entry *txe, *txn;
struct rte_mbuf *mb, *mb_seg;
uint64_t buf_dma_addr;
uint16_t desc_idx, desc_idx_last;
@@ -4268,7 +4268,7 @@ static int
iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
uint32_t free_cnt)
{
- struct iavf_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 7b56076d32..1a191f2c89 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _IAVF_RXTX_H_
#define _IAVF_RXTX_H_
+#include <_common_intel/tx.h>
+
/* In QLEN must be whole number of 32 descriptors. */
#define IAVF_ALIGN_RING_DESC 32
#define IAVF_MIN_RING_DESC 64
@@ -271,22 +273,12 @@ struct iavf_rx_queue {
uint64_t hw_time_update;
};
-struct iavf_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct iavf_tx_vec_entry {
- struct rte_mbuf *mbuf;
-};
-
/* Structure associated with each TX queue. */
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
- struct iavf_tx_entry *sw_ring; /* address array of SW ring */
+ struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
uint16_t tx_tail; /* current value of tail */
volatile uint8_t *qtx_tail; /* register address of tail */
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 0baf5045c8..e7d3d52655 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1736,7 +1736,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 5a88007096..a899309f94 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1847,7 +1847,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
static __rte_always_inline int
iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
{
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -1960,7 +1960,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry_avx512(struct iavf_tx_vec_entry *txep,
+tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -2313,7 +2313,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
@@ -2380,7 +2380,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, nb_mbuf, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
@@ -2478,7 +2478,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */
const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
- struct iavf_tx_vec_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
if (!txq->sw_ring || txq->nb_free == max_desc)
return;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 26b6f07614..df40857218 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -19,7 +19,7 @@
static __rte_always_inline int
iavf_tx_free_bufs(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -74,7 +74,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct iavf_tx_entry *txep,
+tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 48b01462ea..0a30b1ef64 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1368,7 +1368,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 91f4943a11..4b98e4066b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -389,7 +389,7 @@ reset_rx_queue(struct ice_rx_queue *rxq)
static inline void
reset_tx_queue(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint32_t i, size;
uint16_t prev;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0c7106c7e0..d584086a36 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1028,7 +1028,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
static void
ice_reset_tx_queue(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint16_t i, prev, size;
if (!txq) {
@@ -1509,7 +1509,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket(NULL,
- sizeof(struct ice_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2837,7 +2837,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
static inline int
ice_xmit_cleanup(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct ice_tx_desc *txd = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -2961,8 +2961,8 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
struct ice_tx_queue *txq;
volatile struct ice_tx_desc *tx_ring;
volatile struct ice_tx_desc *txd;
- struct ice_tx_entry *sw_ring;
- struct ice_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
struct rte_mbuf *tx_pkt;
struct rte_mbuf *m_seg;
uint32_t cd_tunneling_params;
@@ -3184,7 +3184,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
static __rte_always_inline int
ice_tx_free_bufs(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t i;
if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
@@ -3221,7 +3221,7 @@ static int
ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
uint32_t free_cnt)
{
- struct ice_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -3361,7 +3361,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
- struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
int mainpart, leftover;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 45f25b3609..8d1a1a8676 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -5,6 +5,7 @@
#ifndef _ICE_RXTX_H_
#define _ICE_RXTX_H_
+#include <_common_intel/tx.h>
#include "ice_ethdev.h"
#define ICE_ALIGN_RING_DESC 32
@@ -144,21 +145,11 @@ struct ice_rx_queue {
bool ts_enable; /* if rxq timestamp is enabled */
};
-struct ice_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct ice_vec_tx_entry {
- struct rte_mbuf *mbuf;
-};
-
struct ice_tx_queue {
uint16_t nb_tx_desc; /* number of TX descriptors */
rte_iova_t tx_ring_dma; /* TX ring DMA address */
volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
- struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
uint16_t tx_tail; /* current value of tail register */
volatile uint8_t *qtx_tail; /* register address of tail */
uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index ca247b155c..cf1862263a 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 1e603d5d8f..6b6aa3f1fe 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -862,7 +862,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
static __rte_always_inline int
ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
{
- struct ice_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -1040,7 +1040,7 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
}
static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ice_vec_tx_entry *txep,
+ice_tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -1055,7 +1055,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index dd7da4761f..3dc6061e84 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -15,7 +15,7 @@
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -70,7 +70,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
}
static __rte_always_inline void
-ice_tx_backlog_entry(struct ice_tx_entry *txep,
+ice_tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -135,7 +135,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
- struct ice_vec_tx_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
if (txq->tx_tail < i) {
for (; i < txq->nb_tx_desc; i++) {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 01533454ba..889b754cc1 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index d451562269..2241726ad8 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -52,7 +52,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
struct ixgbe_tx_queue *txq = tx_queue;
- struct ixgbe_tx_entry *txep;
+ struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
uint32_t status;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 7d16eb9df7..db4b993ebc 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -100,7 +100,7 @@
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t status;
int i, nb_free = 0;
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
@@ -199,7 +199,7 @@ ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
- struct ixgbe_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP-1;
int mainpart, leftover;
@@ -563,7 +563,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
static inline int
ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -624,8 +624,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
struct ixgbe_tx_queue *txq;
- struct ixgbe_tx_entry *sw_ring;
- struct ixgbe_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
volatile union ixgbe_adv_tx_desc *txr;
volatile union ixgbe_adv_tx_desc *txd, *txp;
struct rte_mbuf *tx_pkt;
@@ -2352,7 +2352,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
static int
ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
{
- struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -2490,7 +2490,7 @@ static void __rte_cold
ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = {{0}};
- struct ixgbe_tx_entry *txe = txq->sw_ring;
+ struct ci_tx_entry *txe = txq->sw_ring;
uint16_t prev, i;
/* Zero out HW ring memory */
@@ -2795,7 +2795,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
- sizeof(struct ixgbe_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq->sw_ring == NULL) {
ixgbe_tx_queue_release(txq);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 0550c1da60..1647396419 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _IXGBE_RXTX_H_
#define _IXGBE_RXTX_H_
+#include <_common_intel/tx.h>
+
/*
* Rings setup and release.
*
@@ -75,22 +77,6 @@ struct ixgbe_scattered_rx_entry {
struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
};
-/**
- * Structure associated with each descriptor of the TX ring of a TX queue.
- */
-struct ixgbe_tx_entry {
- struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
- uint16_t next_id; /**< Index of next descriptor in ring. */
- uint16_t last_id; /**< Index of last scattered descriptor. */
-};
-
-/**
- * Structure associated with each descriptor of the TX ring of a TX queue.
- */
-struct ixgbe_tx_entry_v {
- struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
-};
-
/**
* Structure associated with each RX queue.
*/
@@ -202,8 +188,8 @@ struct ixgbe_tx_queue {
volatile union ixgbe_adv_tx_desc *tx_ring;
uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
union {
- struct ixgbe_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
- struct ixgbe_tx_entry_v *sw_ring_v; /**< address of SW ring for vector PMD */
+ struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
+ struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
};
volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
uint16_t nb_tx_desc; /**< number of TX descriptors. */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 2bab17c934..e9592c0d08 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -14,7 +14,7 @@
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t status;
uint32_t n;
uint32_t i;
@@ -69,7 +69,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct ixgbe_tx_entry_v *txep,
+tx_backlog_entry(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -82,7 +82,7 @@ static inline void
_ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
{
unsigned int i;
- struct ixgbe_tx_entry_v *txe;
+ struct ci_tx_entry_vec *txe;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc)
@@ -149,7 +149,7 @@ static inline void
_ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } };
- struct ixgbe_tx_entry_v *txe = txq->sw_ring_v;
+ struct ci_tx_entry_vec *txe = txq->sw_ring_v;
uint16_t i;
/* Zero out HW ring memory */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 7b35093075..02b53c008e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -573,7 +573,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = DCMD_DTYP_FLAGS;
uint64_t rs = IXGBE_ADVTXD_DCMD_RS | DCMD_DTYP_FLAGS;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a709bf8c7f..c8b5377c9f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -695,7 +695,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = DCMD_DTYP_FLAGS;
uint64_t rs = IXGBE_ADVTXD_DCMD_RS|DCMD_DTYP_FLAGS;
--
2.43.0
* [PATCH v2 03/22] net/_common_intel: add Tx mbuf ring replenish fn
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 01/22] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 02/22] net/_common_intel: provide common Tx entry structures Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 04/22] drivers/net: align Tx queue struct field names Bruce Richardson
` (18 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, David Christensen, Ian Stokes,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
Move the short function used to place mbufs on the SW Tx ring to common
code to avoid duplication.
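For context, the call pattern all the drivers share looks roughly like the
sketch below; it assumes the common txq fields (sw_ring, tx_tail,
nb_tx_desc) and elides the descriptor writes (vtx1()/ice_vtx() and friends)
that sit between the two calls in the real burst functions:

/* Sketch only: copy mbuf pointers into the SW ring, splitting the copy
 * at the ring wrap point, and return the new tail position.
 */
static inline uint16_t
example_backlog_burst(struct ci_tx_entry *sw_ring, uint16_t nb_tx_desc,
		uint16_t tx_tail, struct rte_mbuf **tx_pkts, uint16_t nb_commit)
{
	uint16_t tx_id = tx_tail;
	struct ci_tx_entry *txep = &sw_ring[tx_id];
	uint16_t n = (uint16_t)(nb_tx_desc - tx_id);

	if (nb_commit >= n) {
		/* fill entries up to the end of the ring, then wrap */
		ci_tx_backlog_entry(txep, tx_pkts, n);
		tx_pkts += n;
		nb_commit -= n;
		tx_id = 0;
		txep = &sw_ring[0];
	}
	/* remainder after the (possible) wrap */
	ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
	return (uint16_t)(tx_id + nb_commit);
}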
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 7 +++++++
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_common.h | 10 ----------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 ++--
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 ++--
drivers/net/iavf/iavf_rxtx_vec_common.h | 10 ----------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 4 ++--
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 ++--
drivers/net/ice/ice_rxtx_vec_common.h | 10 ----------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 ++--
12 files changed, 23 insertions(+), 46 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 384352b9db..5397007411 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -24,4 +24,11 @@ struct ci_tx_entry_vec {
struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
};
+static __rte_always_inline void
+ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ for (uint16_t i = 0; i < nb_pkts; ++i)
+ txep[i].mbuf = tx_pkts[i];
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index ca1038eaa6..80f07a3e10 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -575,7 +575,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -592,7 +592,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index e8441de759..b26bae4757 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -765,7 +765,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -783,7 +783,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 619fb89110..325e99c1a4 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -84,16 +84,6 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 9b90a32e28..26bc345a0a 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -702,7 +702,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -719,7 +719,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index e1fa2ed543..ebc32b0d27 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -721,7 +721,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -738,7 +738,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index e7d3d52655..28885800e0 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1757,7 +1757,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
iavf_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -1775,7 +1775,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index df40857218..2c118cc059 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -73,16 +73,6 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
return txq->rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 0a30b1ef64..bc4b8f14c8 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1390,7 +1390,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -1407,7 +1407,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index cf1862263a..336697e72d 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -881,7 +881,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -899,7 +899,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- ice_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 3dc6061e84..32e4541267 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -69,16 +69,6 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-ice_tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 889b754cc1..debdd8f6a2 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -724,7 +724,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
ice_vtx1(txdp, *tx_pkts, flags);
@@ -741,7 +741,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- ice_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v2 04/22] drivers/net: align Tx queue struct field names
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (2 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 03/22] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 05/22] drivers/net: add prefix for driver-specific structs Bruce Richardson
` (17 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin,
Konstantin Ananyev, Anatoly Burakov, Wathsala Vithanage
Across the various Intel drivers, fields in the Tx queue structure that
serve the same function are sometimes given different names. Rename those
fields consistently across drivers to prepare for merging the structures;
the renames are summarised below.
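For reference, the renames visible in this patch's hunks are:

  iavf:        rs_thresh   -> tx_rs_thresh
               free_thresh -> tx_free_thresh
               nb_used     -> nb_tx_used
               nb_free     -> nb_tx_free
               next_dd     -> tx_next_dd
               next_rs     -> tx_next_rs
  i40e, iavf:  tx_ring_phys_addr -> tx_ring_dma (now typed rte_iova_t)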
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 6 +--
drivers/net/i40e/i40e_rxtx.h | 2 +-
drivers/net/iavf/iavf_rxtx.c | 60 ++++++++++++-------------
drivers/net/iavf/iavf_rxtx.h | 14 +++---
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 19 ++++----
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 57 +++++++++++------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 24 +++++-----
drivers/net/iavf/iavf_rxtx_vec_sse.c | 18 ++++----
drivers/net/iavf/iavf_vchnl.c | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++----
drivers/net/ixgbe/ixgbe_rxtx.h | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +-
14 files changed, 116 insertions(+), 114 deletions(-)
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 2e1f07d2a1..b0bb20fe9a 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2549,7 +2549,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
/* Allocate software ring */
@@ -2923,7 +2923,7 @@ i40e_tx_queue_init(struct i40e_tx_queue *txq)
/* clear the context structure first */
memset(&tx_ctx, 0, sizeof(tx_ctx));
tx_ctx.new_context = 1;
- tx_ctx.base = txq->tx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT;
+ tx_ctx.base = txq->tx_ring_dma / I40E_QUEUE_BASE_ADDR_UNIT;
tx_ctx.qlen = txq->nb_tx_desc;
#ifdef RTE_LIBRTE_IEEE1588
@@ -3209,7 +3209,7 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
txq->vsi = pf->fdir.fdir_vsi;
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
/*
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 0f5d3cb0b7..f420c98687 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -129,7 +129,7 @@ struct i40e_rx_queue {
*/
struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
- uint64_t tx_ring_phys_addr; /**< TX ring DMA address */
+ rte_iova_t tx_ring_dma; /**< TX ring DMA address */
volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index e337f20073..adaaeb4625 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -216,8 +216,8 @@ static inline bool
check_tx_vec_allow(struct iavf_tx_queue *txq)
{
if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) &&
- txq->rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
- txq->rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) {
+ txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
+ txq->tx_rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) {
PMD_INIT_LOG(DEBUG, "Vector tx can be enabled on this txq.");
return true;
}
@@ -309,13 +309,13 @@ reset_tx_queue(struct iavf_tx_queue *txq)
}
txq->tx_tail = 0;
- txq->nb_used = 0;
+ txq->nb_tx_used = 0;
txq->last_desc_cleaned = txq->nb_tx_desc - 1;
- txq->nb_free = txq->nb_tx_desc - 1;
+ txq->nb_tx_free = txq->nb_tx_desc - 1;
- txq->next_dd = txq->rs_thresh - 1;
- txq->next_rs = txq->rs_thresh - 1;
+ txq->tx_next_dd = txq->tx_rs_thresh - 1;
+ txq->tx_next_rs = txq->tx_rs_thresh - 1;
}
static int
@@ -845,8 +845,8 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
txq->nb_tx_desc = nb_desc;
- txq->rs_thresh = tx_rs_thresh;
- txq->free_thresh = tx_free_thresh;
+ txq->tx_rs_thresh = tx_rs_thresh;
+ txq->tx_free_thresh = tx_free_thresh;
txq->queue_id = queue_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
@@ -881,7 +881,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
rte_free(txq);
return -ENOMEM;
}
- txq->tx_ring_phys_addr = mz->iova;
+ txq->tx_ring_dma = mz->iova;
txq->tx_ring = (struct iavf_tx_desc *)mz->addr;
txq->mz = mz;
@@ -2387,7 +2387,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
volatile struct iavf_tx_desc *txd = txq->tx_ring;
- desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+ desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
if (desc_to_clean_to >= nb_tx_desc)
desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
@@ -2411,7 +2411,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
txq->last_desc_cleaned = desc_to_clean_to;
- txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
return 0;
}
@@ -2807,7 +2807,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Check if the descriptor ring needs to be cleaned. */
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_xmit_cleanup(txq);
desc_idx = txq->tx_tail;
@@ -2862,14 +2862,14 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
"port_id=%u queue_id=%u tx_first=%u tx_last=%u",
txq->port_id, txq->queue_id, desc_idx, desc_idx_last);
- if (nb_desc_required > txq->nb_free) {
+ if (nb_desc_required > txq->nb_tx_free) {
if (iavf_xmit_cleanup(txq)) {
if (idx == 0)
return 0;
goto end_of_tx;
}
- if (unlikely(nb_desc_required > txq->rs_thresh)) {
- while (nb_desc_required > txq->nb_free) {
+ if (unlikely(nb_desc_required > txq->tx_rs_thresh)) {
+ while (nb_desc_required > txq->nb_tx_free) {
if (iavf_xmit_cleanup(txq)) {
if (idx == 0)
return 0;
@@ -2991,10 +2991,10 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* The last packet data descriptor needs End Of Packet (EOP) */
ddesc_cmd = IAVF_TX_DESC_CMD_EOP;
- txq->nb_used = (uint16_t)(txq->nb_used + nb_desc_required);
- txq->nb_free = (uint16_t)(txq->nb_free - nb_desc_required);
+ txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_desc_required);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_desc_required);
- if (txq->nb_used >= txq->rs_thresh) {
+ if (txq->nb_tx_used >= txq->tx_rs_thresh) {
PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
"%4u (port=%d queue=%d)",
desc_idx_last, txq->port_id, txq->queue_id);
@@ -3002,7 +3002,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ddesc_cmd |= IAVF_TX_DESC_CMD_RS;
/* Update txq RS bit counters */
- txq->nb_used = 0;
+ txq->nb_tx_used = 0;
}
ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
@@ -4278,11 +4278,11 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
tx_id = txq->tx_tail;
tx_last = tx_id;
- if (txq->nb_free == 0 && iavf_xmit_cleanup(txq))
+ if (txq->nb_tx_free == 0 && iavf_xmit_cleanup(txq))
return 0;
- nb_tx_to_clean = txq->nb_free;
- nb_tx_free_last = txq->nb_free;
+ nb_tx_to_clean = txq->nb_tx_free;
+ nb_tx_free_last = txq->nb_tx_free;
if (!free_cnt)
free_cnt = txq->nb_tx_desc;
@@ -4305,16 +4305,16 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
tx_id = swr_ring[tx_id].next_id;
} while (--nb_tx_to_clean && pkt_cnt < free_cnt && tx_id != tx_last);
- if (txq->rs_thresh > txq->nb_tx_desc -
- txq->nb_free || tx_id == tx_last)
+ if (txq->tx_rs_thresh > txq->nb_tx_desc -
+ txq->nb_tx_free || tx_id == tx_last)
break;
if (pkt_cnt < free_cnt) {
if (iavf_xmit_cleanup(txq))
break;
- nb_tx_to_clean = txq->nb_free - nb_tx_free_last;
- nb_tx_free_last = txq->nb_free;
+ nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+ nb_tx_free_last = txq->nb_tx_free;
}
}
@@ -4356,8 +4356,8 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_free_thresh = txq->free_thresh;
- qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
qinfo->conf.offloads = txq->offloads;
qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
}
@@ -4432,8 +4432,8 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
desc = txq->tx_tail + offset;
/* go to next desc that has the RS bit */
- desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
- txq->rs_thresh;
+ desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+ txq->tx_rs_thresh;
if (desc >= txq->nb_tx_desc) {
desc -= txq->nb_tx_desc;
if (desc >= txq->nb_tx_desc)
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 1a191f2c89..44e2de731c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -277,25 +277,25 @@ struct iavf_rx_queue {
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
- uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
+ rte_iova_t tx_ring_dma; /* Tx ring DMA address */
struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
uint16_t tx_tail; /* current value of tail */
volatile uint8_t *qtx_tail; /* register address of tail */
/* number of used desc since RS bit set */
- uint16_t nb_used;
- uint16_t nb_free;
+ uint16_t nb_tx_used;
+ uint16_t nb_tx_free;
uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t free_thresh;
- uint16_t rs_thresh;
+ uint16_t tx_free_thresh;
+ uint16_t tx_rs_thresh;
uint8_t rel_mbufs_type;
struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
uint16_t port_id;
uint16_t queue_id;
uint64_t offloads;
- uint16_t next_dd; /* next to set RS, for VPMD */
- uint16_t next_rs; /* next to check DD, for VPMD */
+ uint16_t tx_next_dd; /* next to check DD, for VPMD */
+ uint16_t tx_next_rs; /* next to set RS, for VPMD */
uint16_t ipsec_crypto_pkt_md_offset;
uint64_t mbuf_errors;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 28885800e0..42e09a2adf 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1742,18 +1742,19 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs(txq);
- nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
+ nb_commit = nb_pkts;
tx_id = txq->tx_tail;
txdp = &txq->tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -1768,7 +1769,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -1780,12 +1781,12 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -1806,7 +1807,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec_avx2(tx_queue, &tx_pkts[nb_tx],
num, offload);
nb_tx += ret;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index a899309f94..dc1fef24f0 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1854,18 +1854,18 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+ if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
- n = txq->rs_thresh >> txq->use_ctx;
+ n = txq->tx_rs_thresh >> txq->use_ctx;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
txep = (void *)txq->sw_ring;
- txep += (txq->next_dd >> txq->use_ctx) - (n - 1);
+ txep += (txq->tx_next_dd >> txq->use_ctx) - (n - 1);
if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
@@ -1951,12 +1951,12 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
done:
/* buffers were freed, update counters */
- txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
- txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
- if (txq->next_dd >= txq->nb_tx_desc)
- txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
- return txq->rs_thresh;
+ return txq->tx_rs_thresh;
}
static __rte_always_inline void
@@ -2319,19 +2319,20 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs_avx512(txq);
- nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
+ nb_commit = nb_pkts;
tx_id = txq->tx_tail;
txdp = &txq->tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -2346,7 +2347,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -2359,12 +2360,12 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -2386,10 +2387,10 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs_avx512(txq);
- nb_commit = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts << 1);
+ nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1);
nb_commit &= 0xFFFE;
if (unlikely(nb_commit == 0))
return 0;
@@ -2400,7 +2401,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
txep += (tx_id >> 1);
- txq->nb_free = (uint16_t)(txq->nb_free - nb_commit);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_commit);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (n != 0 && nb_commit >= n) {
@@ -2414,7 +2415,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
tx_id = 0;
/* avoid reach the end of ring */
txdp = txq->tx_ring;
@@ -2427,12 +2428,12 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -2452,7 +2453,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec_avx512(tx_queue, &tx_pkts[nb_tx],
num, offload);
nb_tx += ret;
@@ -2480,10 +2481,10 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
- if (!txq->sw_ring || txq->nb_free == max_desc)
+ if (!txq->sw_ring || txq->nb_tx_free == max_desc)
return;
- i = (txq->next_dd - txq->rs_thresh + 1) >> txq->use_ctx;
+ i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx;
while (i != end_desc) {
rte_pktmbuf_free_seg(swr[i].mbuf);
swr[i].mbuf = NULL;
@@ -2517,7 +2518,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->tx_rs_thresh);
num = num >> 1;
ret = iavf_xmit_fixed_burst_vec_avx512_ctx(tx_queue, &tx_pkts[nb_tx],
num, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 2c118cc059..ff24055c34 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -26,17 +26,17 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+ if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
- n = txq->rs_thresh;
+ n = txq->tx_rs_thresh;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
- txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+ txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
if (likely(m != NULL)) {
free[0] = m;
@@ -65,12 +65,12 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
}
/* buffers were freed, update counters */
- txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
- txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
- if (txq->next_dd >= txq->nb_tx_desc)
- txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
- return txq->rs_thresh;
+ return txq->tx_rs_thresh;
}
static inline void
@@ -109,10 +109,10 @@ _iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq)
unsigned i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
- if (!txq->sw_ring || txq->nb_free == max_desc)
+ if (!txq->sw_ring || txq->nb_tx_free == max_desc)
return;
- i = txq->next_dd - txq->rs_thresh + 1;
+ i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
while (i != txq->tx_tail) {
rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
txq->sw_ring[i].mbuf = NULL;
@@ -169,8 +169,8 @@ iavf_tx_vec_queue_default(struct iavf_tx_queue *txq)
if (!txq)
return -1;
- if (txq->rs_thresh < IAVF_VPMD_TX_MAX_BURST ||
- txq->rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF)
+ if (txq->tx_rs_thresh < IAVF_VPMD_TX_MAX_BURST ||
+ txq->tx_rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF)
return -1;
if (txq->offloads & IAVF_TX_NO_VECTOR_FLAGS)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index bc4b8f14c8..ed8455d669 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1374,10 +1374,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
int i;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs(txq);
- nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
nb_commit = nb_pkts;
@@ -1386,7 +1386,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txdp = &txq->tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -1400,7 +1400,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -1412,12 +1412,12 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -1441,7 +1441,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num);
nb_tx += ret;
nb_pkts -= ret;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 065ab3594c..0646a2f978 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1247,7 +1247,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
/* Virtchnnl configure tx queues by pairs */
if (i < adapter->dev_data->nb_tx_queues) {
vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
- vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+ vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
}
vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
diff --git a/drivers/net/ixgbe/base/ixgbe_osdep.h b/drivers/net/ixgbe/base/ixgbe_osdep.h
index 502f386b56..95dbe2bedd 100644
--- a/drivers/net/ixgbe/base/ixgbe_osdep.h
+++ b/drivers/net/ixgbe/base/ixgbe_osdep.h
@@ -124,7 +124,7 @@ static inline uint32_t ixgbe_read_addr(volatile void* addr)
rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg)
#define IXGBE_PCI_REG_ADDR(hw, reg) \
- ((volatile uint32_t *)((char *)(hw)->hw_addr + (reg)))
+ ((volatile void *)((char *)(hw)->hw_addr + (reg)))
#define IXGBE_PCI_REG_ARRAY_ADDR(hw, reg, index) \
IXGBE_PCI_REG_ADDR((hw), (reg) + ((index) << 2))
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index db4b993ebc..0a80b944f0 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -308,7 +308,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* update tail pointer */
rte_wmb();
- IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
@@ -946,7 +946,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
(unsigned) txq->port_id, (unsigned) txq->queue_id,
(unsigned) tx_id, (unsigned) nb_tx);
- IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, tx_id);
+ IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, tx_id);
txq->tx_tail = tx_id;
return nb_tx;
@@ -2786,11 +2786,11 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
hw->mac.type == ixgbe_mac_X550_vf ||
hw->mac.type == ixgbe_mac_X550EM_x_vf ||
hw->mac.type == ixgbe_mac_X550EM_a_vf)
- txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx));
+ txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx));
else
- txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
+ txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
/* Allocate software ring */
@@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
- txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
+ txq->sw_ring, txq->tx_ring, txq->tx_ring_dma);
/* set up vector or scalar TX function as appropriate */
ixgbe_set_tx_function(dev, txq);
@@ -5303,7 +5303,7 @@ ixgbe_dev_tx_init(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- bus_addr = txq->tx_ring_phys_addr;
+ bus_addr = txq->tx_ring_dma;
IXGBE_WRITE_REG(hw, IXGBE_TDBAL(txq->reg_idx),
(uint32_t)(bus_addr & 0x00000000ffffffffULL));
IXGBE_WRITE_REG(hw, IXGBE_TDBAH(txq->reg_idx),
@@ -5887,7 +5887,7 @@ ixgbevf_dev_tx_init(struct rte_eth_dev *dev)
/* Setup the Base and Length of the Tx Descriptor Rings */
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- bus_addr = txq->tx_ring_phys_addr;
+ bus_addr = txq->tx_ring_dma;
IXGBE_WRITE_REG(hw, IXGBE_VFTDBAL(i),
(uint32_t)(bus_addr & 0x00000000ffffffffULL));
IXGBE_WRITE_REG(hw, IXGBE_VFTDBAH(i),
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 1647396419..00e2009b3e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -186,12 +186,12 @@ struct ixgbe_advctx_info {
struct ixgbe_tx_queue {
/** TX ring virtual address. */
volatile union ixgbe_adv_tx_desc *tx_ring;
- uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
+ rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
union {
struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
};
- volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
+ volatile uint8_t *qtx_tail; /**< Address of TDT register. */
uint16_t nb_tx_desc; /**< number of TX descriptors. */
uint16_t tx_tail; /**< current value of TDT reg. */
/**< Start freeing TX buffers if there are less free descriptors than
@@ -218,7 +218,7 @@ struct ixgbe_tx_queue {
/** Hardware context0 history. */
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
const struct ixgbe_txq_ops *ops; /**< txq ops */
- uint8_t tx_deferred_start; /**< not in global dev start. */
+ bool tx_deferred_start; /**< don't start this queue in dev start. */
#ifdef RTE_LIB_SECURITY
uint8_t using_ipsec;
/**< indicates that IPsec TX feature is in use */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 02b53c008e..871c1a7cd2 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -628,7 +628,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_tail = tx_id;
- IXGBE_PCI_REG_WRITE(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index c8b5377c9f..37f2079519 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -751,7 +751,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_tail = tx_id;
- IXGBE_PCI_REG_WC_WRITE(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WC_WRITE(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
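The renamed fields above drive the standard RS/DD completion-threshold
scheme. A condensed sketch of the DD-side bookkeeping, with the logic
lifted from the iavf_tx_free_bufs hunk in this patch (the
sketch_tx_free_done name is hypothetical and the mbuf-freeing loop is
elided, so this is illustrative only, not a drop-in function):

	/* Poll for completion only on the "threshold" descriptor, i.e.
	 * the last one that carried the RS (Report Status) bit. */
	static inline uint16_t
	sketch_tx_free_done(struct iavf_tx_queue *txq)
	{
		if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
		     rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
		    rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
			return 0; /* HW has not finished this block yet */

		/* ... free the tx_rs_thresh mbufs ending at tx_next_dd ... */

		/* A whole tx_rs_thresh block is reclaimed; advance the
		 * threshold descriptor, wrapping at the end of the ring. */
		txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
		txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
		if (txq->tx_next_dd >= txq->nb_tx_desc)
			txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
		return txq->tx_rs_thresh;
	}

The transmit side mirrors this: once the tail index passes tx_next_rs,
the RS bit is set on that descriptor and tx_next_rs advances by
tx_rs_thresh, exactly as the vector paths in the hunks above do.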
* [PATCH v2 05/22] drivers/net: add prefix for driver-specific structs
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (3 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 04/22] drivers/net: align Tx queue struct field names Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 06/22] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
` (16 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
In preparation for merging the Tx structs for multiple drivers into a
single struct, rename the driver-specific pointers in each struct to
carry a driver prefix, avoiding name conflicts in the merged struct.
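For example, in the i40e Tx queue struct the generic field names become
driver-prefixed ones (a condensed illustration only; the full mechanical
rename is in the diff below):

	/* before: generic names that would collide once structs merge */
	volatile struct i40e_tx_desc *tx_ring;
	struct i40e_vsi *vsi;

	/* after: driver-prefixed, so these fields can later coexist in
	 * a single merged Tx queue struct */
	volatile struct i40e_tx_desc *i40e_tx_ring;
	struct i40e_vsi *i40e_vsi;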
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_fdir.c | 6 +--
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 30 ++++++------
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 8 ++--
drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +-
drivers/net/i40e/i40e_rxtx_vec_neon.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_sse.c | 6 +--
drivers/net/iavf/iavf_rxtx.c | 24 +++++-----
drivers/net/iavf/iavf_rxtx.h | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 6 +--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++---
drivers/net/iavf/iavf_rxtx_vec_common.h | 2 +-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 6 +--
drivers/net/ice/ice_dcf_ethdev.c | 4 +-
drivers/net/ice/ice_rxtx.c | 48 +++++++++----------
drivers/net/ice/ice_rxtx.h | 4 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 6 +--
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 ++--
drivers/net/ice/ice_rxtx_vec_common.h | 4 +-
drivers/net/ice/ice_rxtx_vec_sse.c | 6 +--
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 22 ++++-----
drivers/net/ixgbe/ixgbe_rxtx.h | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 6 +--
29 files changed, 128 insertions(+), 128 deletions(-)
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index 47f79ecf11..c600167634 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -1383,7 +1383,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev)
volatile struct i40e_tx_desc *tmp_txdp;
tmp_tail = txq->tx_tail;
- tmp_txdp = &txq->tx_ring[tmp_tail + 1];
+ tmp_txdp = &txq->i40e_tx_ring[tmp_tail + 1];
do {
if ((tmp_txdp->cmd_type_offset_bsz &
@@ -1640,7 +1640,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
PMD_DRV_LOG(INFO, "filling filter programming descriptor.");
fdirdp = (volatile struct i40e_filter_program_desc *)
- (&txq->tx_ring[txq->tx_tail]);
+ (&txq->i40e_tx_ring[txq->tx_tail]);
fdirdp->qindex_flex_ptype_vsi =
rte_cpu_to_le_32((fdir_action->rx_queue <<
@@ -1710,7 +1710,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
fdirdp->fd_id = rte_cpu_to_le_32(filter->soft_id);
PMD_DRV_LOG(INFO, "filling transmit descriptor.");
- txdp = &txq->tx_ring[txq->tx_tail + 1];
+ txdp = &txq->i40e_tx_ring[txq->tx_tail + 1];
txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr[txq->tx_tail >> 1]);
td_cmd = I40E_TX_DESC_CMD_EOP |
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 260d238ce4..8679e5c1fd 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -75,7 +75,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
return 0;
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b0bb20fe9a..34ef931859 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -379,7 +379,7 @@ static inline int
i40e_xmit_cleanup(struct i40e_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile struct i40e_tx_desc *txd = txq->tx_ring;
+ volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -1103,7 +1103,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq = tx_queue;
sw_ring = txq->sw_ring;
- txr = txq->tx_ring;
+ txr = txq->i40e_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
@@ -1338,7 +1338,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
const uint16_t k = RTE_ALIGN_FLOOR(tx_rs_thresh, RTE_I40E_TX_MAX_FREE_BUF_SZ);
const uint16_t m = tx_rs_thresh % RTE_I40E_TX_MAX_FREE_BUF_SZ;
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -1417,7 +1417,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
+ volatile struct i40e_tx_desc *txdp = &txq->i40e_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
@@ -1445,7 +1445,7 @@ tx_xmit_pkts(struct i40e_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- volatile struct i40e_tx_desc *txr = txq->tx_ring;
+ volatile struct i40e_tx_desc *txr = txq->i40e_tx_ring;
uint16_t n = 0;
/**
@@ -1556,7 +1556,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
bool pkt_error = false;
const char *reason = NULL;
uint16_t good_pkts = nb_pkts;
- struct i40e_adapter *adapter = txq->vsi->adapter;
+ struct i40e_adapter *adapter = txq->i40e_vsi->adapter;
for (idx = 0; idx < nb_pkts; idx++) {
mb = tx_pkts[idx];
@@ -2329,7 +2329,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->i40e_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_le_to_cpu_64(I40E_TXD_QW1_DTYPE_MASK);
expect = rte_cpu_to_le_64(
I40E_TX_DESC_DTYPE_DESC_DONE << I40E_TXD_QW1_DTYPE_SHIFT);
@@ -2527,7 +2527,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct i40e_tx_desc) * I40E_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "i40e_tx_ring", queue_idx,
ring_size, I40E_RING_BASE_ALIGN, socket_id);
if (!tz) {
i40e_tx_queue_release(txq);
@@ -2546,11 +2546,11 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->reg_idx = reg_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
- txq->vsi = vsi;
+ txq->i40e_vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
+ txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr;
/* Allocate software ring */
txq->sw_ring =
@@ -2885,11 +2885,11 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct i40e_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->i40e_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile struct i40e_tx_desc *txd = &txq->tx_ring[i];
+ volatile struct i40e_tx_desc *txd = &txq->i40e_tx_ring[i];
txd->cmd_type_offset_bsz =
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
@@ -2914,7 +2914,7 @@ int
i40e_tx_queue_init(struct i40e_tx_queue *txq)
{
enum i40e_status_code err = I40E_SUCCESS;
- struct i40e_vsi *vsi = txq->vsi;
+ struct i40e_vsi *vsi = txq->i40e_vsi;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t pf_q = txq->reg_idx;
struct i40e_hmc_obj_txq tx_ctx;
@@ -3207,10 +3207,10 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
txq->nb_tx_desc = I40E_FDIR_NUM_TX_DESC;
txq->queue_id = I40E_FDIR_QUEUE_ID;
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
- txq->vsi = pf->fdir.fdir_vsi;
+ txq->i40e_vsi = pf->fdir.fdir_vsi;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
+ txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr;
/*
* don't need to allocate software ring and reset for the fdir
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index f420c98687..8315ee2f59 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -130,7 +130,7 @@ struct i40e_rx_queue {
struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
rte_iova_t tx_ring_dma; /**< TX ring DMA address */
- volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
+ volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */
struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
volatile uint8_t *qtx_tail; /**< register address of tail */
@@ -150,7 +150,7 @@ struct i40e_tx_queue {
uint16_t port_id; /**< Device port identifier. */
uint16_t queue_id; /**< TX queue index. */
uint16_t reg_idx;
- struct i40e_vsi *vsi; /**< the VSI this queue belongs to */
+ struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */
uint16_t tx_next_dd;
uint16_t tx_next_rs;
bool q_set; /**< indicate if tx queue has been configured */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 80f07a3e10..bf0e9ebd71 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -568,7 +568,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -588,7 +588,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -598,7 +598,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index b26bae4757..5042e348db 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -758,7 +758,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -779,7 +779,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -789,7 +789,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 8b8a16daa8..04fbe3b2e3 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -764,7 +764,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -948,7 +948,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -970,7 +970,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->i40e_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -980,7 +980,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 325e99c1a4..e81f958361 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -26,7 +26,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 26bc345a0a..05191e4884 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -695,7 +695,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -715,7 +715,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -725,7 +725,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index ebc32b0d27..d81b553842 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -714,7 +714,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -734,7 +734,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -744,7 +744,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index adaaeb4625..6eda91e76b 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -296,11 +296,11 @@ reset_tx_queue(struct iavf_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct iavf_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->iavf_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i].cmd_type_offset_bsz =
+ txq->iavf_tx_ring[i].cmd_type_offset_bsz =
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
txe[i].mbuf = NULL;
txe[i].last_id = i;
@@ -851,7 +851,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
- txq->vsi = vsi;
+ txq->iavf_vsi = vsi;
if (iavf_ipsec_crypto_supported(adapter))
txq->ipsec_crypto_pkt_md_offset =
@@ -872,7 +872,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct iavf_tx_desc) * IAVF_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, IAVF_DMA_MEM_ALIGN);
- mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ mz = rte_eth_dma_zone_reserve(dev, "iavf_tx_ring", queue_idx,
ring_size, IAVF_RING_BASE_ALIGN,
socket_id);
if (!mz) {
@@ -882,7 +882,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
txq->tx_ring_dma = mz->iova;
- txq->tx_ring = (struct iavf_tx_desc *)mz->addr;
+ txq->iavf_tx_ring = (struct iavf_tx_desc *)mz->addr;
txq->mz = mz;
reset_tx_queue(txq);
@@ -2385,7 +2385,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
uint16_t desc_to_clean_to;
uint16_t nb_tx_to_clean;
- volatile struct iavf_tx_desc *txd = txq->tx_ring;
+ volatile struct iavf_tx_desc *txd = txq->iavf_tx_ring;
desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
if (desc_to_clean_to >= nb_tx_desc)
@@ -2796,7 +2796,7 @@ uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct iavf_tx_queue *txq = tx_queue;
- volatile struct iavf_tx_desc *txr = txq->tx_ring;
+ volatile struct iavf_tx_desc *txr = txq->iavf_tx_ring;
struct ci_tx_entry *txe_ring = txq->sw_ring;
struct ci_tx_entry *txe, *txn;
struct rte_mbuf *mb, *mb_seg;
@@ -3803,10 +3803,10 @@ iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
struct iavf_tx_queue *txq = tx_queue;
enum iavf_tx_burst_type tx_burst_type;
- if (!txq->vsi || txq->vsi->adapter->no_poll)
+ if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll)
return 0;
- tx_burst_type = txq->vsi->adapter->tx_burst_type;
+ tx_burst_type = txq->iavf_vsi->adapter->tx_burst_type;
return iavf_tx_pkt_burst_ops[tx_burst_type](tx_queue,
tx_pkts, nb_pkts);
@@ -3824,9 +3824,9 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
const char *reason = NULL;
bool pkt_error = false;
struct iavf_tx_queue *txq = tx_queue;
- struct iavf_adapter *adapter = txq->vsi->adapter;
+ struct iavf_adapter *adapter = txq->iavf_vsi->adapter;
enum iavf_tx_burst_type tx_burst_type =
- txq->vsi->adapter->tx_burst_type;
+ txq->iavf_vsi->adapter->tx_burst_type;
for (idx = 0; idx < nb_pkts; idx++) {
mb = tx_pkts[idx];
@@ -4440,7 +4440,7 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->iavf_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_le_to_cpu_64(IAVF_TXD_QW1_DTYPE_MASK);
expect = rte_cpu_to_le_64(
IAVF_TX_DESC_DTYPE_DESC_DONE << IAVF_TXD_QW1_DTYPE_SHIFT);
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 44e2de731c..cc1eaaf54c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -276,7 +276,7 @@ struct iavf_rx_queue {
/* Structure associated with each TX queue. */
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
+ volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */
rte_iova_t tx_ring_dma; /* Tx ring DMA address */
struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
@@ -289,7 +289,7 @@ struct iavf_tx_queue {
uint16_t tx_free_thresh;
uint16_t tx_rs_thresh;
uint8_t rel_mbufs_type;
- struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
+ struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */
uint16_t port_id;
uint16_t queue_id;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 42e09a2adf..f33ceceee1 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1751,7 +1751,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -1772,7 +1772,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -1782,7 +1782,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index dc1fef24f0..97420a75fd 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1854,7 +1854,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -2328,7 +2328,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -2350,7 +2350,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
}
@@ -2361,7 +2361,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
@@ -2397,7 +2397,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = nb_commit >> 1;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += (tx_id >> 1);
@@ -2418,7 +2418,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
tx_id = 0;
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->iavf_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -2429,7 +2429,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index ff24055c34..6305c8cdd6 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -26,7 +26,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ed8455d669..64c3bf0eaa 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1383,7 +1383,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -1403,7 +1403,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -1413,7 +1413,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4b98e4066b..4ffd1f5567 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -401,11 +401,11 @@ reset_tx_queue(struct ice_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->ice_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i].cmd_type_offset_bsz =
+ txq->ice_tx_ring[i].cmd_type_offset_bsz =
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
txe[i].mbuf = NULL;
txe[i].last_id = i;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index d584086a36..5ec92f6d0c 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -776,7 +776,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
if (!txq_elem)
return -ENOMEM;
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
hw = ICE_VSI_TO_HW(vsi);
pf = ICE_VSI_TO_PF(vsi);
@@ -966,7 +966,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
if (!txq_elem)
return -ENOMEM;
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
hw = ICE_VSI_TO_HW(vsi);
memset(&tx_ctx, 0, sizeof(tx_ctx));
@@ -1039,11 +1039,11 @@ ice_reset_tx_queue(struct ice_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->ice_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+ volatile struct ice_tx_desc *txd = &txq->ice_tx_ring[i];
txd->cmd_type_offset_bsz =
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
@@ -1153,7 +1153,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
PMD_DRV_LOG(INFO, "TX queue %u not started", tx_queue_id);
return 0;
}
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
q_ids[0] = txq->reg_idx;
q_teids[0] = txq->q_teid;
@@ -1479,7 +1479,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "ice_tx_ring", queue_idx,
ring_size, ICE_RING_BASE_ALIGN,
socket_id);
if (!tz) {
@@ -1500,11 +1500,11 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
txq->reg_idx = vsi->base_queue + queue_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
- txq->vsi = vsi;
+ txq->ice_vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = tz->addr;
+ txq->ice_tx_ring = tz->addr;
/* Allocate software ring */
txq->sw_ring =
@@ -2372,7 +2372,7 @@ ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->ice_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
ICE_TXD_QW1_DTYPE_S);
@@ -2452,10 +2452,10 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
txq->nb_tx_desc = ICE_FDIR_NUM_TX_DESC;
txq->queue_id = ICE_FDIR_QUEUE_ID;
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
- txq->vsi = pf->fdir.fdir_vsi;
+ txq->ice_vsi = pf->fdir.fdir_vsi;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+ txq->ice_tx_ring = (struct ice_tx_desc *)tz->addr;
/*
* don't need to allocate software ring and reset for the fdir
* program queue just set the queue has been configured.
@@ -2838,7 +2838,7 @@ static inline int
ice_xmit_cleanup(struct ice_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile struct ice_tx_desc *txd = txq->tx_ring;
+ volatile struct ice_tx_desc *txd = txq->ice_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -2959,7 +2959,7 @@ uint16_t
ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct ice_tx_queue *txq;
- volatile struct ice_tx_desc *tx_ring;
+ volatile struct ice_tx_desc *ice_tx_ring;
volatile struct ice_tx_desc *txd;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
@@ -2981,7 +2981,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq = tx_queue;
sw_ring = txq->sw_ring;
- tx_ring = txq->tx_ring;
+ ice_tx_ring = txq->ice_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
@@ -3064,7 +3064,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Setup TX context descriptor if required */
volatile struct ice_tx_ctx_desc *ctx_txd =
(volatile struct ice_tx_ctx_desc *)
- &tx_ring[tx_id];
+ &ice_tx_ring[tx_id];
uint16_t cd_l2tag2 = 0;
uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
@@ -3082,7 +3082,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
cd_type_cmd_tso_mss |=
((uint64_t)ICE_TX_CTX_DESC_TSYN <<
ICE_TXD_CTX_QW1_CMD_S) |
- (((uint64_t)txq->vsi->adapter->ptp_tx_index <<
+ (((uint64_t)txq->ice_vsi->adapter->ptp_tx_index <<
ICE_TXD_CTX_QW1_TSYN_S) & ICE_TXD_CTX_QW1_TSYN_M);
ctx_txd->tunneling_params =
@@ -3106,7 +3106,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
m_seg = tx_pkt;
do {
- txd = &tx_ring[tx_id];
+ txd = &ice_tx_ring[tx_id];
txn = &sw_ring[txe->next_id];
if (txe->mbuf)
@@ -3134,7 +3134,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txe->last_id = tx_last;
tx_id = txe->next_id;
txe = txn;
- txd = &tx_ring[tx_id];
+ txd = &ice_tx_ring[tx_id];
txn = &sw_ring[txe->next_id];
}
@@ -3187,7 +3187,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
struct ci_tx_entry *txep;
uint16_t i;
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -3360,7 +3360,7 @@ static inline void
ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+ volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
@@ -3393,7 +3393,7 @@ tx_xmit_pkts(struct ice_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- volatile struct ice_tx_desc *txr = txq->tx_ring;
+ volatile struct ice_tx_desc *txr = txq->ice_tx_ring;
uint16_t n = 0;
/**
@@ -3722,7 +3722,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
bool pkt_error = false;
uint16_t good_pkts = nb_pkts;
const char *reason = NULL;
- struct ice_adapter *adapter = txq->vsi->adapter;
+ struct ice_adapter *adapter = txq->ice_vsi->adapter;
uint64_t ol_flags;
for (idx = 0; idx < nb_pkts; idx++) {
@@ -4701,11 +4701,11 @@ ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc)
uint16_t i;
fdirdp = (volatile struct ice_fltr_desc *)
- (&txq->tx_ring[txq->tx_tail]);
+ (&txq->ice_tx_ring[txq->tx_tail]);
fdirdp->qidx_compq_space_stat = fdir_desc->qidx_compq_space_stat;
fdirdp->dtype_cmd_vsi_fdid = fdir_desc->dtype_cmd_vsi_fdid;
- txdp = &txq->tx_ring[txq->tx_tail + 1];
+ txdp = &txq->ice_tx_ring[txq->tx_tail + 1];
txdp->buf_addr = rte_cpu_to_le_64(pf->fdir.dma_addr);
td_cmd = ICE_TX_DESC_CMD_EOP |
ICE_TX_DESC_CMD_RS |
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 8d1a1a8676..3257f449f5 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -148,7 +148,7 @@ struct ice_rx_queue {
struct ice_tx_queue {
uint16_t nb_tx_desc; /* number of TX descriptors */
rte_iova_t tx_ring_dma; /* TX ring DMA address */
- volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+ volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
uint16_t tx_tail; /* current value of tail register */
volatile uint8_t *qtx_tail; /* register address of tail */
@@ -171,7 +171,7 @@ struct ice_tx_queue {
uint32_t q_teid; /* TX schedule node id. */
uint16_t reg_idx;
uint64_t offloads;
- struct ice_vsi *vsi; /* the VSI this queue belongs to */
+ struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */
uint16_t tx_next_dd;
uint16_t tx_next_rs;
uint64_t mbuf_errors;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 336697e72d..dde07ac99e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -874,7 +874,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -895,7 +895,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -905,7 +905,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 6b6aa3f1fe..e4d0270176 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -869,7 +869,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -1071,7 +1071,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -1093,7 +1093,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->ice_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -1103,7 +1103,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 32e4541267..7b865b53ad 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -22,7 +22,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -121,7 +121,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
#ifdef __AVX512VL__
- struct rte_eth_dev *dev = &rte_eth_devices[txq->vsi->adapter->pf.dev_data->port_id];
+ struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id];
if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index debdd8f6a2..364207e8a8 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -717,7 +717,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -737,7 +737,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -747,7 +747,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index 2241726ad8..a878db3150 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -72,7 +72,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
return 0;
/* check DD bits on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & IXGBE_ADVTXD_STAT_DD))
return 0;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0a80b944f0..f7ddbba1b6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -106,7 +106,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
/* check DD bit on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD)))
return 0;
@@ -198,7 +198,7 @@ static inline void
ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
+ volatile union ixgbe_adv_tx_desc *txdp = &txq->ixgbe_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP-1;
@@ -232,7 +232,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
- volatile union ixgbe_adv_tx_desc *tx_r = txq->tx_ring;
+ volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring;
uint16_t n = 0;
/*
@@ -564,7 +564,7 @@ static inline int
ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring;
+ volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -652,7 +652,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.data[1] = 0;
txq = tx_queue;
sw_ring = txq->sw_ring;
- txr = txq->tx_ring;
+ txr = txq->ixgbe_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
txp = NULL;
@@ -2495,13 +2495,13 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
/* Zero out HW ring memory */
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i] = zeroed_desc;
+ txq->ixgbe_tx_ring[i] = zeroed_desc;
}
/* Initialize SW ring entries */
prev = (uint16_t) (txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i];
+ volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i];
txd->wb.status = rte_cpu_to_le_32(IXGBE_TXD_STAT_DD);
txe[i].mbuf = NULL;
@@ -2751,7 +2751,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
* handle the maximum ring size is allocated in order to allow for
* resizing in later calls to the queue setup function.
*/
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "ixgbe_tx_ring", queue_idx,
sizeof(union ixgbe_adv_tx_desc) * IXGBE_MAX_RING_DESC,
IXGBE_ALIGN, socket_id);
if (tz == NULL) {
@@ -2791,7 +2791,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
+ txq->ixgbe_tx_ring = (union ixgbe_adv_tx_desc *)tz->addr;
/* Allocate software ring */
txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
@@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
- txq->sw_ring, txq->tx_ring, txq->tx_ring_dma);
+ txq->sw_ring, txq->ixgbe_tx_ring, txq->tx_ring_dma);
/* set up vector or scalar TX function as appropriate */
ixgbe_set_tx_function(dev, txq);
@@ -3328,7 +3328,7 @@ ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].wb.status;
+ status = &txq->ixgbe_tx_ring[desc].wb.status;
if (*status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD))
return RTE_ETH_TX_DESC_DONE;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 00e2009b3e..f6bae37cf3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -185,7 +185,7 @@ struct ixgbe_advctx_info {
*/
struct ixgbe_tx_queue {
/** TX ring virtual address. */
- volatile union ixgbe_adv_tx_desc *tx_ring;
+ volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
union {
struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index e9592c0d08..cc51bf6eed 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -22,7 +22,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
/* check DD bit on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & IXGBE_ADVTXD_STAT_DD))
return 0;
@@ -154,11 +154,11 @@ _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
/* Zero out HW ring memory */
for (i = 0; i < txq->nb_tx_desc; i++)
- txq->tx_ring[i] = zeroed_desc;
+ txq->ixgbe_tx_ring[i] = zeroed_desc;
/* Initialize SW ring entries */
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i];
+ volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i];
txd->wb.status = IXGBE_TXD_STAT_DD;
txe[i].mbuf = NULL;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 871c1a7cd2..06be7ec82a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -590,7 +590,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -610,7 +610,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
}
@@ -620,7 +620,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |=
+ txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |=
rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
txq->tx_next_rs = (uint16_t)(txq->tx_next_rs +
txq->tx_rs_thresh);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 37f2079519..a21a57bd55 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -712,7 +712,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -733,7 +733,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &(txq->tx_ring[tx_id]);
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
}
@@ -743,7 +743,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |=
+ txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |=
rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
txq->tx_next_rs = (uint16_t)(txq->tx_next_rs +
txq->tx_rs_thresh);
--
2.43.0
* [PATCH v2 06/22] net/_common_intel: merge ice and i40e Tx queue struct
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (4 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 05/22] drivers/net: add prefix for driver-specific structs Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 07/22] net/iavf: use common Tx queue structure Bruce Richardson
` (15 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Anatoly Burakov
The queue structures of the i40e and ice drivers are virtually identical,
so merge them into a common struct. This should make it easier to merge
the functions that operate on the queue struct in future patches.
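To make the intent concrete, here is a minimal sketch (not part of this
patch; the helper name is hypothetical, and it assumes the tx.h
definitions added below plus <rte_byteorder.h>) of the kind of function
that becomes shareable once both drivers use the common struct, since
the ice and i40e descriptors use the same QW1 layout:

static inline int
ci_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx,
		uint64_t dtype_mask, uint64_t done_val)
{
	/* sketch: either ring-union member could be used here, as
	 * ice_tx_ring and i40e_tx_ring alias the same ring pointer
	 */
	return (txq->ice_tx_ring[idx].cmd_type_offset_bsz &
			rte_cpu_to_le_64(dtype_mask)) ==
			rte_cpu_to_le_64(done_val);
}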
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 55 +++++++++++++++++
drivers/net/i40e/i40e_ethdev.c | 4 +-
drivers/net/i40e/i40e_ethdev.h | 4 +-
drivers/net/i40e/i40e_fdir.c | 4 +-
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 58 +++++++++---------
drivers/net/i40e/i40e_rxtx.h | 50 ++--------------
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +-
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +-
drivers/net/ice/ice_dcf.c | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 10 ++--
drivers/net/ice/ice_diagnose.c | 2 +-
drivers/net/ice/ice_ethdev.c | 2 +-
drivers/net/ice/ice_ethdev.h | 4 +-
drivers/net/ice/ice_rxtx.c | 60 +++++++++----------
drivers/net/ice/ice_rxtx.h | 41 +------------
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +--
drivers/net/ice/ice_rxtx_vec_common.h | 8 +--
drivers/net/ice/ice_rxtx_vec_sse.c | 6 +-
24 files changed, 165 insertions(+), 185 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 5397007411..c965f5ee6c 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -8,6 +8,9 @@
#include <stdint.h>
#include <rte_mbuf.h>
+/* forward declaration of the common intel (ci) queue structure */
+struct ci_tx_queue;
+
/**
* Structure associated with each descriptor of the TX ring of a TX queue.
*/
@@ -24,6 +27,58 @@ struct ci_tx_entry_vec {
struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
};
+typedef void (*ice_tx_release_mbufs_t)(struct ci_tx_queue *txq);
+
+struct ci_tx_queue {
+ union { /* TX ring virtual address */
+ volatile struct ice_tx_desc *ice_tx_ring;
+ volatile struct i40e_tx_desc *i40e_tx_ring;
+ };
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ rte_iova_t tx_ring_dma; /* TX ring DMA address */
+ uint16_t nb_tx_desc; /* number of TX descriptors */
+ uint16_t tx_tail; /* current value of tail register */
+ uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+ /* index to last TX descriptor to have been cleaned */
+ uint16_t last_desc_cleaned;
+ /* Total number of TX descriptors ready to be allocated. */
+ uint16_t nb_tx_free;
+ /* Start freeing TX buffers if there are less free descriptors than
+ * this value.
+ */
+ uint16_t tx_free_thresh;
+ /* Number of TX descriptors to use before RS bit is set. */
+ uint16_t tx_rs_thresh;
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
+ uint16_t port_id; /* Device port identifier. */
+ uint16_t queue_id; /* TX queue index. */
+ uint16_t reg_idx;
+ uint64_t offloads;
+ uint16_t tx_next_dd;
+ uint16_t tx_next_rs;
+ uint64_t mbuf_errors;
+ bool tx_deferred_start; /* don't start this queue in dev start */
+ bool q_set; /* indicate if tx queue has been configured */
+ union { /* the VSI this queue belongs to */
+ struct ice_vsi *ice_vsi;
+ struct i40e_vsi *i40e_vsi;
+ };
+ const struct rte_memzone *mz;
+
+ union {
+ struct { /* ICE driver specific values */
+ ice_tx_release_mbufs_t tx_rel_mbufs;
+ uint32_t q_teid; /* TX schedule node id. */
+ };
+ struct { /* I40E driver specific values */
+ uint8_t dcb_tc;
+ };
+ };
+};
+
static __rte_always_inline void
ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 30dcdc68a8..bf5560ccc8 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3685,7 +3685,7 @@ i40e_dev_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct i40e_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
@@ -6585,7 +6585,7 @@ i40e_dev_tx_init(struct i40e_pf *pf)
struct rte_eth_dev_data *data = pf->dev_data;
uint16_t i;
uint32_t ret = I40E_SUCCESS;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (i = 0; i < data->nb_tx_queues; i++) {
txq = data->tx_queues[i];
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 98213948b4..d351193ed9 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -334,7 +334,7 @@ struct i40e_vsi_list {
};
struct i40e_rx_queue;
-struct i40e_tx_queue;
+struct ci_tx_queue;
/* Bandwidth limit information */
struct i40e_bw_info {
@@ -738,7 +738,7 @@ TAILQ_HEAD(i40e_fdir_filter_list, i40e_fdir_filter);
struct i40e_fdir_info {
struct i40e_vsi *fdir_vsi; /* pointer to fdir VSI structure */
uint16_t match_counter_index; /* Statistic counter index used for fdir*/
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct i40e_rx_queue *rxq;
void *prg_pkt[I40E_FDIR_PRG_PKT_CNT]; /* memory for fdir program packet */
uint64_t dma_addr[I40E_FDIR_PRG_PKT_CNT]; /* physic address of packet memory*/
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index c600167634..349627a2ed 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -1372,7 +1372,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_fdir_info *fdir_info = &pf->fdir;
- struct i40e_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
/* no available buffer
* search for more available buffers from the current
@@ -1628,7 +1628,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
const struct i40e_fdir_filter_conf *filter,
bool add, bool wait_status)
{
- struct i40e_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
struct i40e_rx_queue *rxq = pf->fdir.rxq;
const struct i40e_fdir_action *fdir_action = &filter->action;
volatile struct i40e_tx_desc *txdp;
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 8679e5c1fd..5a65c80d90 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -55,7 +55,7 @@ uint16_t
i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 34ef931859..305bc53480 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -376,7 +376,7 @@ i40e_build_ctob(uint32_t td_cmd,
}
static inline int
-i40e_xmit_cleanup(struct i40e_tx_queue *txq)
+i40e_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring;
@@ -1080,7 +1080,7 @@ i40e_calc_pkt_desc(struct rte_mbuf *tx_pkt)
uint16_t
i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
volatile struct i40e_tx_desc *txd;
@@ -1329,7 +1329,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
static __rte_always_inline int
-i40e_tx_free_bufs(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint16_t tx_rs_thresh = txq->tx_rs_thresh;
@@ -1413,7 +1413,7 @@ tx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts)
/* Fill hardware descriptor ring with mbuf data */
static inline void
-i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
+i40e_tx_fill_hw_ring(struct ci_tx_queue *txq,
struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
@@ -1441,7 +1441,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
}
static inline uint16_t
-tx_xmit_pkts(struct i40e_tx_queue *txq,
+tx_xmit_pkts(struct ci_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
@@ -1504,14 +1504,14 @@ i40e_xmit_pkts_simple(void *tx_queue,
uint16_t nb_tx = 0;
if (likely(nb_pkts <= I40E_TX_MAX_BURST))
- return tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ return tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
tx_pkts, nb_pkts);
while (nb_pkts) {
uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
I40E_TX_MAX_BURST);
- ret = tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ ret = tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
&tx_pkts[nb_tx], num);
nb_tx = (uint16_t)(nb_tx + ret);
nb_pkts = (uint16_t)(nb_pkts - ret);
@@ -1527,7 +1527,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1549,7 +1549,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
static uint16_t
i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
uint16_t idx;
uint64_t ol_flags;
struct rte_mbuf *mb;
@@ -1611,7 +1611,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
pkt_error = true;
break;
}
- if (mb->nb_segs > ((struct i40e_tx_queue *)tx_queue)->nb_tx_desc) {
+ if (mb->nb_segs > ((struct ci_tx_queue *)tx_queue)->nb_tx_desc) {
PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
pkt_error = true;
break;
@@ -1873,7 +1873,7 @@ int
i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
int err;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1907,7 +1907,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int
i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -2311,7 +2311,7 @@ i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
@@ -2341,7 +2341,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
static int
i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
- struct i40e_tx_queue *txq)
+ struct ci_tx_queue *txq)
{
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -2394,7 +2394,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
{
struct i40e_vsi *vsi;
struct i40e_pf *pf = NULL;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -2515,7 +2515,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("i40e tx queue",
- sizeof(struct i40e_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -2600,7 +2600,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
void
i40e_tx_queue_release(void *txq)
{
- struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
@@ -2705,7 +2705,7 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
}
void
-i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
+i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
struct rte_eth_dev *dev;
uint16_t i;
@@ -2765,7 +2765,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
}
static int
-i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
+i40e_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -2824,7 +2824,7 @@ i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
}
static int
-i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
+i40e_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -2848,7 +2848,7 @@ i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
}
static int
-i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
+i40e_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -2856,7 +2856,7 @@ i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
int
i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -2872,7 +2872,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
}
void
-i40e_reset_tx_queue(struct i40e_tx_queue *txq)
+i40e_reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint16_t i, prev, size;
@@ -2911,7 +2911,7 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq)
/* Init the TX queue in hardware */
int
-i40e_tx_queue_init(struct i40e_tx_queue *txq)
+i40e_tx_queue_init(struct ci_tx_queue *txq)
{
enum i40e_status_code err = I40E_SUCCESS;
struct i40e_vsi *vsi = txq->i40e_vsi;
@@ -3167,7 +3167,7 @@ i40e_dev_free_queues(struct rte_eth_dev *dev)
enum i40e_status_code
i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz = NULL;
struct rte_eth_dev *dev;
uint32_t ring_size;
@@ -3181,7 +3181,7 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("i40e fdir tx queue",
- sizeof(struct i40e_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
SOCKET_ID_ANY);
if (!txq) {
@@ -3304,7 +3304,7 @@ void
i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -3552,7 +3552,7 @@ i40e_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
}
void __rte_cold
-i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
+i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3592,7 +3592,7 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
#endif
if (ad->tx_vec_allowed) {
for (i = 0; i < dev->data->nb_tx_queues; i++) {
- struct i40e_tx_queue *txq =
+ struct ci_tx_queue *txq =
dev->data->tx_queues[i];
if (txq && i40e_txq_vec_setup(txq)) {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 8315ee2f59..043d1df912 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -124,44 +124,6 @@ struct i40e_rx_queue {
const struct rte_memzone *mz;
};
-/*
- * Structure associated with each TX queue.
- */
-struct i40e_tx_queue {
- uint16_t nb_tx_desc; /**< number of TX descriptors */
- rte_iova_t tx_ring_dma; /**< TX ring DMA address */
- volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */
- struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
- uint16_t tx_tail; /**< current value of tail register */
- volatile uint8_t *qtx_tail; /**< register address of tail */
- uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
- /**< index to last TX descriptor to have been cleaned */
- uint16_t last_desc_cleaned;
- /**< Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- /**< Start freeing TX buffers if there are less free descriptors than
- this value. */
- uint16_t tx_free_thresh;
- /** Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint16_t port_id; /**< Device port identifier. */
- uint16_t queue_id; /**< TX queue index. */
- uint16_t reg_idx;
- struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */
- uint16_t tx_next_dd;
- uint16_t tx_next_rs;
- bool q_set; /**< indicate if tx queue has been configured */
- uint64_t mbuf_errors;
-
- bool tx_deferred_start; /**< don't start this queue in dev start */
- uint8_t dcb_tc; /**< Traffic class of tx queue */
- uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
- const struct rte_memzone *mz;
-};
-
/** Offload features */
union i40e_tx_offload {
uint64_t data;
@@ -209,15 +171,15 @@ uint16_t i40e_simple_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
uint16_t i40e_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int i40e_tx_queue_init(struct i40e_tx_queue *txq);
+int i40e_tx_queue_init(struct ci_tx_queue *txq);
int i40e_rx_queue_init(struct i40e_rx_queue *rxq);
-void i40e_free_tx_resources(struct i40e_tx_queue *txq);
+void i40e_free_tx_resources(struct ci_tx_queue *txq);
void i40e_free_rx_resources(struct i40e_rx_queue *rxq);
void i40e_dev_clear_queues(struct rte_eth_dev *dev);
void i40e_dev_free_queues(struct rte_eth_dev *dev);
void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
-void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+void i40e_reset_tx_queue(struct ci_tx_queue *txq);
+void i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq);
int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
@@ -237,13 +199,13 @@ uint16_t i40e_recv_scattered_pkts_vec(void *rx_queue,
uint16_t nb_pkts);
int i40e_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
int i40e_rxq_vec_setup(struct i40e_rx_queue *rxq);
-int i40e_txq_vec_setup(struct i40e_tx_queue *txq);
+int i40e_txq_vec_setup(struct ci_tx_queue *txq);
void i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq);
uint16_t i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
void i40e_set_rx_function(struct rte_eth_dev *dev);
void i40e_set_tx_function_flag(struct rte_eth_dev *dev,
- struct i40e_tx_queue *txq);
+ struct ci_tx_queue *txq);
void i40e_set_tx_function(struct rte_eth_dev *dev);
void i40e_set_default_ptype_table(struct rte_eth_dev *dev);
void i40e_set_default_pctype_table(struct rte_eth_dev *dev);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index bf0e9ebd71..500bba2cef 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -551,7 +551,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -625,7 +625,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused * txq)
+i40e_txq_vec_setup(struct ci_tx_queue __rte_unused * txq)
{
return 0;
}
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 5042e348db..29bef64287 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -743,7 +743,7 @@ static inline uint16_t
i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -808,7 +808,7 @@ i40e_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 04fbe3b2e3..a3f6d1667f 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -755,7 +755,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
}
static __rte_always_inline int
-i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -933,7 +933,7 @@ static inline uint16_t
i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -999,7 +999,7 @@ i40e_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index e81f958361..57d6263ccf 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -17,7 +17,7 @@
#endif
static __rte_always_inline int
-i40e_tx_free_bufs(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 05191e4884..c97f337e43 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -679,7 +679,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
struct rte_mbuf **__rte_restrict tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -753,7 +753,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq)
+i40e_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
return 0;
}
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index d81b553842..2c467e2089 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -698,7 +698,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -771,7 +771,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq)
+i40e_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
return 0;
}
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 204d4eadbb..65c18921f4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1177,8 +1177,8 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
{
struct ice_rx_queue **rxq =
(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
- struct ice_tx_queue **txq =
- (struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
+ struct ci_tx_queue **txq =
+ (struct ci_tx_queue **)hw->eth_dev->data->tx_queues;
struct virtchnl_vsi_queue_config_info *vc_config;
struct virtchnl_queue_pair_info *vc_qp;
struct dcf_virtchnl_cmd args;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4ffd1f5567..a0c065d78c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -387,7 +387,7 @@ reset_rx_queue(struct ice_rx_queue *rxq)
}
static inline void
-reset_tx_queue(struct ice_tx_queue *txq)
+reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint32_t i, size;
@@ -454,7 +454,7 @@ ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct iavf_hw *hw = &ad->real_hw.avf;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err = 0;
if (tx_queue_id >= dev->data->nb_tx_queues)
@@ -486,7 +486,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct ice_dcf_hw *hw = &ad->real_hw;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
if (tx_queue_id >= dev->data->nb_tx_queues)
@@ -511,7 +511,7 @@ static int
ice_dcf_start_queues(struct rte_eth_dev *dev)
{
struct ice_rx_queue *rxq;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int nb_rxq = 0;
int nb_txq, i;
@@ -638,7 +638,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct ice_dcf_hw *hw = &ad->real_hw;
struct ice_rx_queue *rxq;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret, i;
/* Stop All queues */
diff --git a/drivers/net/ice/ice_diagnose.c b/drivers/net/ice/ice_diagnose.c
index 5bec9d00ad..a50068441a 100644
--- a/drivers/net/ice/ice_diagnose.c
+++ b/drivers/net/ice/ice_diagnose.c
@@ -605,7 +605,7 @@ void print_node(const struct rte_eth_dev_data *ethdata,
get_elem_type(data->data.elem_type));
if (data->data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
for (uint16_t i = 0; i < ethdata->nb_tx_queues; i++) {
- struct ice_tx_queue *q = ethdata->tx_queues[i];
+ struct ci_tx_queue *q = ethdata->tx_queues[i];
if (q->q_teid == data->node_teid) {
fprintf(stream, "\t\t\t\t<tr><td>TXQ</td><td>%u</td></tr>\n", i);
break;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 93a6308a86..80eee03204 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -6448,7 +6448,7 @@ ice_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct ice_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index a5b27fabd2..ba54655499 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -258,7 +258,7 @@ struct ice_vsi_list {
};
struct ice_rx_queue;
-struct ice_tx_queue;
+struct ci_tx_queue;
/**
* Structure that defines a VSI, associated with a adapter.
@@ -408,7 +408,7 @@ struct ice_fdir_counter_pool_container {
*/
struct ice_fdir_info {
struct ice_vsi *fdir_vsi; /* pointer to fdir VSI structure */
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_rx_queue *rxq;
void *prg_pkt; /* memory for fdir program packet */
uint64_t dma_addr; /* physic address of packet memory*/
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5ec92f6d0c..bcc7c7a016 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -743,7 +743,7 @@ ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct ice_vsi *vsi;
struct ice_hw *hw;
@@ -944,7 +944,7 @@ int
ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct ice_vsi *vsi;
struct ice_hw *hw;
@@ -1008,7 +1008,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* Free all mbufs for descriptors in tx queue */
static void
-_ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+_ice_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -1026,7 +1026,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
}
static void
-ice_reset_tx_queue(struct ice_tx_queue *txq)
+ice_reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint16_t i, prev, size;
@@ -1066,7 +1066,7 @@ ice_reset_tx_queue(struct ice_tx_queue *txq)
int
ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
@@ -1134,7 +1134,7 @@ ice_fdir_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
@@ -1354,7 +1354,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -1467,7 +1467,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket(NULL,
- sizeof(struct ice_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -1542,7 +1542,7 @@ ice_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
void
ice_tx_queue_release(void *txq)
{
- struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
@@ -1577,7 +1577,7 @@ void
ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -2354,7 +2354,7 @@ ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct ice_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
@@ -2412,7 +2412,7 @@ ice_free_queues(struct rte_eth_dev *dev)
int
ice_fdir_setup_tx_resources(struct ice_pf *pf)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz = NULL;
uint32_t ring_size;
struct rte_eth_dev *dev;
@@ -2426,7 +2426,7 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("ice fdir tx queue",
- sizeof(struct ice_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
SOCKET_ID_ANY);
if (!txq) {
@@ -2835,7 +2835,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
}
static inline int
-ice_xmit_cleanup(struct ice_tx_queue *txq)
+ice_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct ice_tx_desc *txd = txq->ice_tx_ring;
@@ -2958,7 +2958,7 @@ ice_calc_pkt_desc(struct rte_mbuf *tx_pkt)
uint16_t
ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
volatile struct ice_tx_desc *ice_tx_ring;
volatile struct ice_tx_desc *txd;
struct ci_tx_entry *sw_ring;
@@ -3182,7 +3182,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
static __rte_always_inline int
-ice_tx_free_bufs(struct ice_tx_queue *txq)
+ice_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint16_t i;
@@ -3218,7 +3218,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
}
static int
-ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
+ice_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -3278,7 +3278,7 @@ ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
#ifdef RTE_ARCH_X86
static int
-ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
+ice_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -3286,7 +3286,7 @@ ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
#endif
static int
-ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
+ice_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -3312,7 +3312,7 @@ ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
int
ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3357,7 +3357,7 @@ tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
}
static inline void
-ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+ice_tx_fill_hw_ring(struct ci_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail];
@@ -3389,7 +3389,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
}
static inline uint16_t
-tx_xmit_pkts(struct ice_tx_queue *txq,
+tx_xmit_pkts(struct ci_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
@@ -3452,14 +3452,14 @@ ice_xmit_pkts_simple(void *tx_queue,
uint16_t nb_tx = 0;
if (likely(nb_pkts <= ICE_TX_MAX_BURST))
- return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ return tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
tx_pkts, nb_pkts);
while (nb_pkts) {
uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
ICE_TX_MAX_BURST);
- ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ ret = tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
&tx_pkts[nb_tx], num);
nb_tx = (uint16_t)(nb_tx + ret);
nb_pkts = (uint16_t)(nb_pkts - ret);
@@ -3667,7 +3667,7 @@ ice_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
}
void __rte_cold
-ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
+ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3716,7 +3716,7 @@ ice_check_empty_mbuf(struct rte_mbuf *tx_pkt)
static uint16_t
ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct ice_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
uint16_t idx;
struct rte_mbuf *mb;
bool pkt_error = false;
@@ -3778,7 +3778,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
pkt_error = true;
break;
}
- if (mb->nb_segs > ((struct ice_tx_queue *)tx_queue)->nb_tx_desc) {
+ if (mb->nb_segs > ((struct ci_tx_queue *)tx_queue)->nb_tx_desc) {
PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
pkt_error = true;
break;
@@ -3839,7 +3839,7 @@ ice_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
(m->tso_segsz < ICE_MIN_TSO_MSS ||
m->tso_segsz > ICE_MAX_TSO_MSS ||
m->nb_segs >
- ((struct ice_tx_queue *)tx_queue)->nb_tx_desc ||
+ ((struct ci_tx_queue *)tx_queue)->nb_tx_desc ||
m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
/**
* MSS outside the range are considered malicious
@@ -3881,7 +3881,7 @@ ice_set_tx_function(struct rte_eth_dev *dev)
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int mbuf_check = ad->devargs.mbuf_check;
#ifdef RTE_ARCH_X86
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
int tx_check_ret = -1;
@@ -4693,7 +4693,7 @@ ice_check_fdir_programming_status(struct ice_rx_queue *rxq)
int
ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc)
{
- struct ice_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
struct ice_rx_queue *rxq = pf->fdir.rxq;
volatile struct ice_fltr_desc *fdirdp;
volatile struct ice_tx_desc *txdp;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 3257f449f5..1cae8a9b50 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -79,7 +79,6 @@ extern int ice_timestamp_dynfield_offset;
#define ICE_TX_MTU_SEG_MAX 8
typedef void (*ice_rx_release_mbufs_t)(struct ice_rx_queue *rxq);
-typedef void (*ice_tx_release_mbufs_t)(struct ice_tx_queue *txq);
typedef void (*ice_rxd_to_pkt_fields_t)(struct ice_rx_queue *rxq,
struct rte_mbuf *mb,
volatile union ice_rx_flex_desc *rxdp);
@@ -145,42 +144,6 @@ struct ice_rx_queue {
bool ts_enable; /* if rxq timestamp is enabled */
};
-struct ice_tx_queue {
- uint16_t nb_tx_desc; /* number of TX descriptors */
- rte_iova_t tx_ring_dma; /* TX ring DMA address */
- volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */
- struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
- uint16_t tx_tail; /* current value of tail register */
- volatile uint8_t *qtx_tail; /* register address of tail */
- uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
- /* index to last TX descriptor to have been cleaned */
- uint16_t last_desc_cleaned;
- /* Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- /* Start freeing TX buffers if there are less free descriptors than
- * this value.
- */
- uint16_t tx_free_thresh;
- /* Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint16_t port_id; /* Device port identifier. */
- uint16_t queue_id; /* TX queue index. */
- uint32_t q_teid; /* TX schedule node id. */
- uint16_t reg_idx;
- uint64_t offloads;
- struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */
- uint16_t tx_next_dd;
- uint16_t tx_next_rs;
- uint64_t mbuf_errors;
- bool tx_deferred_start; /* don't start this queue in dev start */
- bool q_set; /* indicate if tx queue has been configured */
- ice_tx_release_mbufs_t tx_rel_mbufs;
- const struct rte_memzone *mz;
-};
-
/* Offload features */
union ice_tx_offload {
uint64_t data;
@@ -268,7 +231,7 @@ void ice_set_rx_function(struct rte_eth_dev *dev);
uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
void ice_set_tx_function_flag(struct rte_eth_dev *dev,
- struct ice_tx_queue *txq);
+ struct ci_tx_queue *txq);
void ice_set_tx_function(struct rte_eth_dev *dev);
uint32_t ice_rx_queue_count(void *rx_queue);
void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
@@ -290,7 +253,7 @@ void ice_select_rxd_to_pkt_fields_handler(struct ice_rx_queue *rxq,
int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
int ice_rxq_vec_setup(struct ice_rx_queue *rxq);
-int ice_txq_vec_setup(struct ice_tx_queue *txq);
+int ice_txq_vec_setup(struct ci_tx_queue *txq);
uint16_t ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index dde07ac99e..12ffa0fa9a 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -856,7 +856,7 @@ static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -924,7 +924,7 @@ ice_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index e4d0270176..eabd8b04a0 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -860,7 +860,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
}
static __rte_always_inline int
-ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
+ice_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -1053,7 +1053,7 @@ static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool do_offload)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -1122,7 +1122,7 @@ ice_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1144,7 +1144,7 @@ ice_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 7b865b53ad..b39289ceb5 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -13,7 +13,7 @@
#endif
static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
+ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
@@ -105,7 +105,7 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
}
static inline void
-_ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
+_ice_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -231,7 +231,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
}
static inline int
-ice_tx_vec_queue_default(struct ice_tx_queue *txq)
+ice_tx_vec_queue_default(struct ci_tx_queue *txq)
{
if (!txq)
return -1;
@@ -273,7 +273,7 @@ static inline int
ice_tx_vec_dev_check_default(struct rte_eth_dev *dev)
{
int i;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret = 0;
int result = 0;
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 364207e8a8..f11528385a 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -697,7 +697,7 @@ static uint16_t
ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -766,7 +766,7 @@ ice_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -793,7 +793,7 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq)
}
int __rte_cold
-ice_txq_vec_setup(struct ice_tx_queue __rte_unused *txq)
+ice_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
if (!txq)
return -1;
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 07/22] net/iavf: use common Tx queue structure
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (5 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 06/22] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 08/22] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
` (14 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Vladimir Medvedkin, Ian Stokes, Konstantin Ananyev
Merge in the few additional fields used by the iavf driver and convert it
to use the common Tx queue structure as well.
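In sketch form, the driver-specific state hangs off a union at the tail of
the shared structure, so each driver only pays for its own fields. A
trimmed, illustrative extract (not the full structure, see the hunks below
for the real layout):

struct ci_tx_queue {
	/* ... fields shared by all drivers ... */
	union {
		struct { /* I40E driver specific values */
			uint8_t dcb_tc;
		};
		struct { /* iavf driver specific values */
			uint16_t ipsec_crypto_pkt_md_offset;
			uint8_t rel_mbufs_type;
			uint8_t vlan_flag;
			/* ... */
		};
	};
};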
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 15 +++++++-
drivers/net/iavf/iavf.h | 2 +-
drivers/net/iavf/iavf_ethdev.c | 4 +-
drivers/net/iavf/iavf_rxtx.c | 42 ++++++++++-----------
drivers/net/iavf/iavf_rxtx.h | 49 +++----------------------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 8 ++--
drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 ++--
drivers/net/iavf/iavf_vchnl.c | 6 +--
10 files changed, 62 insertions(+), 90 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c965f5ee6c..c4a1a0c816 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -31,8 +31,9 @@ typedef void (*ice_tx_release_mbufs_t)(struct ci_tx_queue *txq);
struct ci_tx_queue {
union { /* TX ring virtual address */
- volatile struct ice_tx_desc *ice_tx_ring;
volatile struct i40e_tx_desc *i40e_tx_ring;
+ volatile struct iavf_tx_desc *iavf_tx_ring;
+ volatile struct ice_tx_desc *ice_tx_ring;
};
volatile uint8_t *qtx_tail; /* register address of tail */
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
@@ -63,8 +64,9 @@ struct ci_tx_queue {
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
union { /* the VSI this queue belongs to */
- struct ice_vsi *ice_vsi;
struct i40e_vsi *i40e_vsi;
+ struct iavf_vsi *iavf_vsi;
+ struct ice_vsi *ice_vsi;
};
const struct rte_memzone *mz;
@@ -76,6 +78,15 @@ struct ci_tx_queue {
struct { /* I40E driver specific values */
uint8_t dcb_tc;
};
+ struct { /* iavf driver specific values */
+ uint16_t ipsec_crypto_pkt_md_offset;
+ uint8_t rel_mbufs_type;
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
+ uint8_t vlan_flag;
+ uint8_t tc;
+ bool use_ctx; /* with ctx info, each pkt needs two descriptors */
+ };
};
};
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index ad526c644c..956c60ef45 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -98,7 +98,7 @@
struct iavf_adapter;
struct iavf_rx_queue;
-struct iavf_tx_queue;
+struct ci_tx_queue;
struct iavf_ipsec_crypto_stats {
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 7f80cd6258..328c224c93 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -954,7 +954,7 @@ static int
iavf_start_queues(struct rte_eth_dev *dev)
{
struct iavf_rx_queue *rxq;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
uint16_t nb_txq, nb_rxq;
@@ -1885,7 +1885,7 @@ iavf_dev_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct iavf_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6eda91e76b..7e381b2a17 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -213,7 +213,7 @@ check_rx_vec_allow(struct iavf_rx_queue *rxq)
}
static inline bool
-check_tx_vec_allow(struct iavf_tx_queue *txq)
+check_tx_vec_allow(struct ci_tx_queue *txq)
{
if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) &&
txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
@@ -282,7 +282,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq)
}
static inline void
-reset_tx_queue(struct iavf_tx_queue *txq)
+reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint32_t i, size;
@@ -388,7 +388,7 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq)
}
static inline void
-release_txq_mbufs(struct iavf_tx_queue *txq)
+release_txq_mbufs(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -778,7 +778,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
struct iavf_info *vf =
IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_vsi *vsi = &vf->vsi;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *mz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -814,7 +814,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("iavf txq",
- sizeof(struct iavf_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -979,7 +979,7 @@ iavf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err = 0;
PMD_DRV_FUNC_TRACE();
@@ -1048,7 +1048,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
PMD_DRV_FUNC_TRACE();
@@ -1092,7 +1092,7 @@ iavf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
void
iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
{
- struct iavf_tx_queue *q = dev->data->tx_queues[qid];
+ struct ci_tx_queue *q = dev->data->tx_queues[qid];
if (!q)
return;
@@ -1107,7 +1107,7 @@ static void
iavf_reset_queues(struct rte_eth_dev *dev)
{
struct iavf_rx_queue *rxq;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
for (i = 0; i < dev->data->nb_tx_queues; i++) {
@@ -2377,7 +2377,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue,
}
static inline int
-iavf_xmit_cleanup(struct iavf_tx_queue *txq)
+iavf_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
@@ -2781,7 +2781,7 @@ iavf_fill_data_desc(volatile struct iavf_tx_desc *desc,
static struct iavf_ipsec_crypto_pkt_metadata *
-iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
+iavf_ipsec_crypto_get_pkt_metadata(const struct ci_tx_queue *txq,
struct rte_mbuf *m)
{
if (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)
@@ -2795,7 +2795,7 @@ iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile struct iavf_tx_desc *txr = txq->iavf_tx_ring;
struct ci_tx_entry *txe_ring = txq->sw_ring;
struct ci_tx_entry *txe, *txn;
@@ -3027,7 +3027,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* correct queue.
*/
static int
-iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
+iavf_check_vlan_up2tc(struct ci_tx_queue *txq, struct rte_mbuf *m)
{
struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id];
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -3646,7 +3646,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
int i, ret;
uint64_t ol_flags;
struct rte_mbuf *m;
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id];
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_adapter *adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3800,7 +3800,7 @@ static uint16_t
iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
enum iavf_tx_burst_type tx_burst_type;
if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll)
@@ -3823,7 +3823,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t good_pkts = nb_pkts;
const char *reason = NULL;
bool pkt_error = false;
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct iavf_adapter *adapter = txq->iavf_vsi->adapter;
enum iavf_tx_burst_type tx_burst_type =
txq->iavf_vsi->adapter->tx_burst_type;
@@ -4144,7 +4144,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
int mbuf_check = adapter->devargs.mbuf_check;
int no_poll_on_link_down = adapter->devargs.no_poll_on_link_down;
#ifdef RTE_ARCH_X86
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
int check_ret;
bool use_sse = false;
@@ -4265,7 +4265,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
}
static int
-iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
+iavf_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -4324,7 +4324,7 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
int
iavf_dev_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct iavf_tx_queue *q = (struct iavf_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
return iavf_tx_done_cleanup_full(q, free_cnt);
}
@@ -4350,7 +4350,7 @@ void
iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -4422,7 +4422,7 @@ iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
int
iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index cc1eaaf54c..c18e01560c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -211,7 +211,7 @@ struct iavf_rxq_ops {
};
struct iavf_txq_ops {
- void (*release_mbufs)(struct iavf_tx_queue *txq);
+ void (*release_mbufs)(struct ci_tx_queue *txq);
};
@@ -273,43 +273,6 @@ struct iavf_rx_queue {
uint64_t hw_time_update;
};
-/* Structure associated with each TX queue. */
-struct iavf_tx_queue {
- const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */
- rte_iova_t tx_ring_dma; /* Tx ring DMA address */
- struct ci_tx_entry *sw_ring; /* address array of SW ring */
- uint16_t nb_tx_desc; /* ring length */
- uint16_t tx_tail; /* current value of tail */
- volatile uint8_t *qtx_tail; /* register address of tail */
- /* number of used desc since RS bit set */
- uint16_t nb_tx_used;
- uint16_t nb_tx_free;
- uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t tx_free_thresh;
- uint16_t tx_rs_thresh;
- uint8_t rel_mbufs_type;
- struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */
-
- uint16_t port_id;
- uint16_t queue_id;
- uint64_t offloads;
- uint16_t tx_next_dd; /* next to set RS, for VPMD */
- uint16_t tx_next_rs; /* next to check DD, for VPMD */
- uint16_t ipsec_crypto_pkt_md_offset;
-
- uint64_t mbuf_errors;
-
- bool q_set; /* if rx queue has been configured */
- bool tx_deferred_start; /* don't start this queue in dev start */
- const struct iavf_txq_ops *ops;
-#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
-#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
- uint8_t vlan_flag;
- uint8_t tc;
- uint8_t use_ctx:1; /* if use the ctx desc, a packet needs two descriptors */
-};
-
/* Offload features */
union iavf_tx_offload {
uint64_t data;
@@ -724,7 +687,7 @@ int iavf_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
int iavf_rx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
-int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+int iavf_txq_vec_setup(struct ci_tx_queue *txq);
uint16_t iavf_recv_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t iavf_recv_pkts_vec_avx512_offload(void *rx_queue,
@@ -757,14 +720,14 @@ uint16_t iavf_xmit_pkts_vec_avx512_ctx_offload(void *tx_queue, struct rte_mbuf *
uint16_t nb_pkts);
uint16_t iavf_xmit_pkts_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq);
+int iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq);
uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
void iavf_set_default_ptype_table(struct rte_eth_dev *dev);
-void iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq);
+void iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq);
void iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq);
-void iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq);
+void iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq);
static inline
void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq,
@@ -791,7 +754,7 @@ void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq,
* to print the qwords
*/
static inline
-void iavf_dump_tx_descriptor(const struct iavf_tx_queue *txq,
+void iavf_dump_tx_descriptor(const struct ci_tx_queue *txq,
const volatile void *desc, uint16_t tx_id)
{
const char *name;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index f33ceceee1..fdb98b417a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1734,7 +1734,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -1801,7 +1801,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 97420a75fd..9cf7171524 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1845,7 +1845,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
}
static __rte_always_inline int
-iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
+iavf_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -2311,7 +2311,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -2379,7 +2379,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, nb_mbuf, tx_id;
@@ -2447,7 +2447,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -2473,7 +2473,7 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
}
void __rte_cold
-iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
+iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq)
{
unsigned int i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
@@ -2494,7 +2494,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
}
int __rte_cold
-iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
{
txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC;
return 0;
@@ -2512,7 +2512,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 6305c8cdd6..f1bb12c4f4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -17,7 +17,7 @@
#endif
static __rte_always_inline int
-iavf_tx_free_bufs(struct iavf_tx_queue *txq)
+iavf_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
@@ -104,7 +104,7 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
}
static inline void
-_iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq)
+_iavf_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
unsigned i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
@@ -164,7 +164,7 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
}
static inline int
-iavf_tx_vec_queue_default(struct iavf_tx_queue *txq)
+iavf_tx_vec_queue_default(struct ci_tx_queue *txq)
{
if (!txq)
return -1;
@@ -227,7 +227,7 @@ static inline int
iavf_tx_vec_dev_check_default(struct rte_eth_dev *dev)
{
int i;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret;
int result = 0;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 64c3bf0eaa..5c0b2fff46 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1366,7 +1366,7 @@ uint16_t
iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -1435,7 +1435,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1459,13 +1459,13 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
}
void __rte_cold
-iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq)
+iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq)
{
_iavf_tx_queue_release_mbufs_vec(txq);
}
int __rte_cold
-iavf_txq_vec_setup(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
return 0;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 0646a2f978..c74466735d 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1218,10 +1218,8 @@ int
iavf_configure_queues(struct iavf_adapter *adapter,
uint16_t num_queue_pairs, uint16_t index)
{
- struct iavf_rx_queue **rxq =
- (struct iavf_rx_queue **)adapter->dev_data->rx_queues;
- struct iavf_tx_queue **txq =
- (struct iavf_tx_queue **)adapter->dev_data->tx_queues;
+ struct iavf_rx_queue **rxq = (struct iavf_rx_queue **)adapter->dev_data->rx_queues;
+ struct ci_tx_queue **txq = (struct ci_tx_queue **)adapter->dev_data->tx_queues;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
struct virtchnl_vsi_queue_config_info *vc_config;
struct virtchnl_queue_pair_info *vc_qp;
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 08/22] net/ixgbe: convert Tx queue context cache field to ptr
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (6 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 07/22] net/iavf: use common Tx queue structure Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 09/22] net/ixgbe: use common Tx queue structure Bruce Richardson
` (13 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin
Rather than having a two-element array of context cache values inside
the Tx queue structure, convert it to a pointer to a cache placed at the
end of the structure. This makes future merging of the structure easier,
as we no longer need the "ixgbe_advctx_info" struct to be defined when
defining a combined queue structure.
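The pattern is a single allocation that carries the cache array in its
tail, with the pointer fixed up straight afterwards. Condensed from the
hunk below (error handling trimmed):

	/* allocate queue struct and context cache in one block ... */
	txq = rte_zmalloc_socket("ethdev TX queue",
			sizeof(struct ixgbe_tx_queue) +
			sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
			RTE_CACHE_LINE_SIZE, socket_id);
	/* ... and point ctx_cache at the space just past the struct */
	txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));

The cache stays in the same cache-line-aligned allocation as before, but
only a forward declaration of ixgbe_advctx_info is now needed wherever
the queue structure itself is defined.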
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 7 ++++---
drivers/net/ixgbe/ixgbe_rxtx.h | 4 ++--
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index f7ddbba1b6..2ca26cd132 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2522,8 +2522,7 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
txq->ctx_curr = 0;
- memset((void *)&txq->ctx_cache, 0,
- IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+ memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
}
static const struct ixgbe_txq_ops def_txq_ops = {
@@ -2741,10 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
/* First allocate the tx queue data structure */
- txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue),
+ txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
+ sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq == NULL)
return -ENOMEM;
+ txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
/*
* Allocate TX ring hardware descriptors. A memzone large enough to
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index f6bae37cf3..847cacf7b5 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -215,8 +215,8 @@ struct ixgbe_tx_queue {
uint8_t wthresh; /**< Write-back threshold reg. */
uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
- /** Hardware context0 history. */
- struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
+ /** Hardware context history. */
+ struct ixgbe_advctx_info *ctx_cache;
const struct ixgbe_txq_ops *ops; /**< txq ops */
bool tx_deferred_start; /**< not in global dev start. */
#ifdef RTE_LIB_SECURITY
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 09/22] net/ixgbe: use common Tx queue structure
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (7 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 08/22] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 10/22] net/_common_intel: pack " Bruce Richardson
` (12 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Merge in additional fields used by the ixgbe driver and then convert it
over to use the common Tx queue structure.
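One detail worth calling out: the single software-ring pointer slot now
serves both entry layouts via a union, with the vector alias renamed from
sw_ring_v to sw_ring_vec in the process. Which view is valid depends on
how the queue was set up. Illustrative extract from the new structure:

	union {
		struct ci_tx_entry *sw_ring;          /* scalar code paths */
		struct ci_tx_entry_vec *sw_ring_vec;  /* vector code paths */
	};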
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 14 +++-
drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 64 +++++++++----------
drivers/net/ixgbe/ixgbe_rxtx.h | 56 ++--------------
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 26 ++++----
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 14 ++--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 14 ++--
8 files changed, 80 insertions(+), 114 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c4a1a0c816..51ae3b051d 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -34,9 +34,13 @@ struct ci_tx_queue {
volatile struct i40e_tx_desc *i40e_tx_ring;
volatile struct iavf_tx_desc *iavf_tx_ring;
volatile struct ice_tx_desc *ice_tx_ring;
+ volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
};
volatile uint8_t *qtx_tail; /* register address of tail */
- struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ union {
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ struct ci_tx_entry_vec *sw_ring_vec;
+ };
rte_iova_t tx_ring_dma; /* TX ring DMA address */
uint16_t nb_tx_desc; /* number of TX descriptors */
uint16_t tx_tail; /* current value of tail register */
@@ -87,6 +91,14 @@ struct ci_tx_queue {
uint8_t tc;
bool use_ctx; /* with ctx info, each pkt needs two descriptors */
};
+ struct { /* ixgbe specific values */
+ const struct ixgbe_txq_ops *ops;
+ struct ixgbe_advctx_info *ctx_cache;
+ uint32_t ctx_curr;
+#ifdef RTE_LIB_SECURITY
+ uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
+#endif
+ };
};
};
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 8bee97d191..5f18fbaad5 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1118,7 +1118,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
* RX and TX function.
*/
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
/* TX queue function in primary, set by last queue initialized
* Tx queue may not initialized by primary process
*/
@@ -1623,7 +1623,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
* RX function
*/
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
/* TX queue function in primary, set by last queue initialized
* Tx queue may not initialized by primary process
*/
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index a878db3150..3fd05ed5eb 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -51,7 +51,7 @@ uint16_t
ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
- struct ixgbe_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 2ca26cd132..344ef85685 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -98,7 +98,7 @@
* Return the total number of buffers freed.
*/
static __rte_always_inline int
-ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t status;
@@ -195,7 +195,7 @@ tx1(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts)
* Copy mbuf pointers to the S/W ring.
*/
static inline void
-ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
+ixgbe_tx_fill_hw_ring(struct ci_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile union ixgbe_adv_tx_desc *txdp = &txq->ixgbe_tx_ring[txq->tx_tail];
@@ -231,7 +231,7 @@ static inline uint16_t
tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring;
uint16_t n = 0;
@@ -344,7 +344,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -362,7 +362,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static inline void
-ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
+ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
__rte_unused uint64_t *mdata)
@@ -493,7 +493,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
* or create a new context descriptor.
*/
static inline uint32_t
-what_advctx_update(struct ixgbe_tx_queue *txq, uint64_t flags,
+what_advctx_update(struct ci_tx_queue *txq, uint64_t flags,
union ixgbe_tx_offload tx_offload)
{
/* If match with the current used context */
@@ -561,7 +561,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
/* Reset transmit descriptors after they have been used */
static inline int
-ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
+ixgbe_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring;
@@ -623,7 +623,7 @@ uint16_t
ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
volatile union ixgbe_adv_tx_desc *txr;
@@ -963,7 +963,7 @@ ixgbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
int i, ret;
uint64_t ol_flags;
struct rte_mbuf *m;
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
for (i = 0; i < nb_pkts; i++) {
m = tx_pkts[i];
@@ -2335,7 +2335,7 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
**********************************************************************/
static void __rte_cold
-ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
unsigned i;
@@ -2350,7 +2350,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
}
static int
-ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
+ixgbe_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
@@ -2408,7 +2408,7 @@ ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
}
static int
-ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
+ixgbe_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -2432,7 +2432,7 @@ ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
}
static int
-ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
+ixgbe_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -2441,7 +2441,7 @@ ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
int
ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
@@ -2450,7 +2450,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
(rte_eal_process_type() != RTE_PROC_PRIMARY ||
- txq->sw_ring_v != NULL)) {
+ txq->sw_ring_vec != NULL)) {
return ixgbe_tx_done_cleanup_vec(txq, free_cnt);
} else {
return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
@@ -2461,7 +2461,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
if (txq != NULL &&
txq->sw_ring != NULL)
@@ -2469,7 +2469,7 @@ ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
}
static void __rte_cold
-ixgbe_tx_queue_release(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
txq->ops->release_mbufs(txq);
@@ -2487,7 +2487,7 @@ ixgbe_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
/* (Re)set dynamic ixgbe_tx_queue fields to defaults */
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = {{0}};
struct ci_tx_entry *txe = txq->sw_ring;
@@ -2536,7 +2536,7 @@ static const struct ixgbe_txq_ops def_txq_ops = {
* in dev_init by secondary process when attaching to an existing ethdev.
*/
void __rte_cold
-ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
+ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
@@ -2618,7 +2618,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_txconf *tx_conf)
{
const struct rte_memzone *tz;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_hw *hw;
uint16_t tx_rs_thresh, tx_free_thresh;
uint64_t offloads;
@@ -2740,12 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
/* First allocate the tx queue data structure */
- txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
+ txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ci_tx_queue) +
sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq == NULL)
return -ENOMEM;
- txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
+ txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ci_tx_queue));
/*
* Allocate TX ring hardware descriptors. A memzone large enough to
@@ -3312,7 +3312,7 @@ ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct ixgbe_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint32_t *status;
uint32_t desc;
@@ -3377,7 +3377,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
for (i = 0; i < dev->data->nb_tx_queues; i++) {
- struct ixgbe_tx_queue *txq = dev->data->tx_queues[i];
+ struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
txq->ops->release_mbufs(txq);
@@ -5284,7 +5284,7 @@ void __rte_cold
ixgbe_dev_tx_init(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint64_t bus_addr;
uint32_t hlreg0;
uint32_t txctrl;
@@ -5402,7 +5402,7 @@ int __rte_cold
ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_rx_queue *rxq;
uint32_t txdctl;
uint32_t dmatxctl;
@@ -5572,7 +5572,7 @@ int __rte_cold
ixgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint32_t txdctl;
int poll_ms;
@@ -5611,7 +5611,7 @@ int __rte_cold
ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint32_t txdctl;
uint32_t txtdh, txtdt;
int poll_ms;
@@ -5685,7 +5685,7 @@ void
ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -5877,7 +5877,7 @@ void __rte_cold
ixgbevf_dev_tx_init(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint64_t bus_addr;
uint32_t txctrl;
uint16_t i;
@@ -5918,7 +5918,7 @@ void __rte_cold
ixgbevf_dev_rxtx_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_rx_queue *rxq;
uint32_t txdctl;
uint32_t rxdctl;
@@ -6127,7 +6127,7 @@ ixgbe_xmit_fixed_burst_vec(void __rte_unused *tx_queue,
}
int
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue __rte_unused *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
return -1;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 847cacf7b5..4333e5bf2f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -180,56 +180,10 @@ struct ixgbe_advctx_info {
union ixgbe_tx_offload tx_offload_mask;
};
-/**
- * Structure associated with each TX queue.
- */
-struct ixgbe_tx_queue {
- /** TX ring virtual address. */
- volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
- rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
- union {
- struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
- struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
- };
- volatile uint8_t *qtx_tail; /**< Address of TDT register. */
- uint16_t nb_tx_desc; /**< number of TX descriptors. */
- uint16_t tx_tail; /**< current value of TDT reg. */
- /**< Start freeing TX buffers if there are less free descriptors than
- this value. */
- uint16_t tx_free_thresh;
- /** Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- /** Number of TX descriptors used since RS bit was set. */
- uint16_t nb_tx_used;
- /** Index to last TX descriptor to have been cleaned. */
- uint16_t last_desc_cleaned;
- /** Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- uint16_t tx_next_dd; /**< next desc to scan for DD bit */
- uint16_t tx_next_rs; /**< next desc to set RS bit */
- uint16_t queue_id; /**< TX queue index. */
- uint16_t reg_idx; /**< TX queue register index. */
- uint16_t port_id; /**< Device port identifier. */
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
- uint32_t ctx_curr; /**< Hardware context states. */
- /** Hardware context history. */
- struct ixgbe_advctx_info *ctx_cache;
- const struct ixgbe_txq_ops *ops; /**< txq ops */
- bool tx_deferred_start; /**< not in global dev start. */
-#ifdef RTE_LIB_SECURITY
- uint8_t using_ipsec;
- /**< indicates that IPsec TX feature is in use */
-#endif
- const struct rte_memzone *mz;
-};
-
struct ixgbe_txq_ops {
- void (*release_mbufs)(struct ixgbe_tx_queue *txq);
- void (*free_swring)(struct ixgbe_tx_queue *txq);
- void (*reset)(struct ixgbe_tx_queue *txq);
+ void (*release_mbufs)(struct ci_tx_queue *txq);
+ void (*free_swring)(struct ci_tx_queue *txq);
+ void (*reset)(struct ci_tx_queue *txq);
};
/*
@@ -250,7 +204,7 @@ struct ixgbe_txq_ops {
* the queue parameters. Used in tx_queue_setup by primary process and then
* in dev_init by secondary process when attaching to an existing ethdev.
*/
-void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq);
+void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq);
/**
* Sets the rx_pkt_burst callback in the ixgbe rte_eth_dev instance.
@@ -287,7 +241,7 @@ void ixgbe_recycle_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb_mbufs);
uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq);
+int ixgbe_txq_vec_setup(struct ci_tx_queue *txq);
uint64_t ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev);
uint64_t ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index cc51bf6eed..81fd8bb64d 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -12,7 +12,7 @@
#include "ixgbe_rxtx.h"
static __rte_always_inline int
-ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t status;
@@ -32,7 +32,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
- txep = &txq->sw_ring_v[txq->tx_next_dd - (n - 1)];
+ txep = &txq->sw_ring_vec[txq->tx_next_dd - (n - 1)];
m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
if (likely(m != NULL)) {
free[0] = m;
@@ -79,7 +79,7 @@ tx_backlog_entry(struct ci_tx_entry_vec *txep,
}
static inline void
-_ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
unsigned int i;
struct ci_tx_entry_vec *txe;
@@ -92,14 +92,14 @@ _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
i != txq->tx_tail;
i = (i + 1) % txq->nb_tx_desc) {
- txe = &txq->sw_ring_v[i];
+ txe = &txq->sw_ring_vec[i];
rte_pktmbuf_free_seg(txe->mbuf);
}
txq->nb_tx_free = max_desc;
/* reset tx_entry */
for (i = 0; i < txq->nb_tx_desc; i++) {
- txe = &txq->sw_ring_v[i];
+ txe = &txq->sw_ring_vec[i];
txe->mbuf = NULL;
}
}
@@ -134,22 +134,22 @@ _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static inline void
-_ixgbe_tx_free_swring_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_tx_free_swring_vec(struct ci_tx_queue *txq)
{
if (txq == NULL)
return;
if (txq->sw_ring != NULL) {
- rte_free(txq->sw_ring_v - 1);
- txq->sw_ring_v = NULL;
+ rte_free(txq->sw_ring_vec - 1);
+ txq->sw_ring_vec = NULL;
}
}
static inline void
-_ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_reset_tx_queue_vec(struct ci_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } };
- struct ci_tx_entry_vec *txe = txq->sw_ring_v;
+ struct ci_tx_entry_vec *txe = txq->sw_ring_vec;
uint16_t i;
/* Zero out HW ring memory */
@@ -199,14 +199,14 @@ ixgbe_rxq_vec_setup_default(struct ixgbe_rx_queue *rxq)
}
static inline int
-ixgbe_txq_vec_setup_default(struct ixgbe_tx_queue *txq,
+ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
const struct ixgbe_txq_ops *txq_ops)
{
- if (txq->sw_ring_v == NULL)
+ if (txq->sw_ring_vec == NULL)
return -1;
/* leave the first one for overflow */
- txq->sw_ring_v = txq->sw_ring_v + 1;
+ txq->sw_ring_vec = txq->sw_ring_vec + 1;
txq->ops = txq_ops;
return 0;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 06be7ec82a..cb749a3760 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -571,7 +571,7 @@ uint16_t
ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -591,7 +591,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -611,7 +611,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
tx_backlog_entry(txep, tx_pkts, nb_commit);
@@ -634,7 +634,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
_ixgbe_tx_queue_release_mbufs_vec(txq);
}
@@ -646,13 +646,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
_ixgbe_tx_free_swring_vec(txq);
}
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
_ixgbe_reset_tx_queue_vec(txq);
}
@@ -670,7 +670,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
}
int __rte_cold
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue *txq)
{
return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops);
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a21a57bd55..e46550f76a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -693,7 +693,7 @@ uint16_t
ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -713,7 +713,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -734,7 +734,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
tx_backlog_entry(txep, tx_pkts, nb_commit);
@@ -757,7 +757,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
_ixgbe_tx_queue_release_mbufs_vec(txq);
}
@@ -769,13 +769,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
_ixgbe_tx_free_swring_vec(txq);
}
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
_ixgbe_reset_tx_queue_vec(txq);
}
@@ -793,7 +793,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
}
int __rte_cold
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue *txq)
{
return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops);
}
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 10/22] net/_common_intel: pack Tx queue structure
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (8 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 09/22] net/ixgbe: use common Tx queue structure Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 11/22] net/_common_intel: add post-Tx buffer free function Bruce Richardson
` (11 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes, Anatoly Burakov
Move some fields around to better pack the Tx queue structure and to make
sure all data used by the vector codepaths is on the first cacheline of
the structure. Checking with "pahole" on a 64-bit build, only one 6-byte
hole is left in the structure, on the second cacheline, after this patch.
As part of the reordering, move the p/h/wthresh values to the
ixgbe-specific part of the union, since ixgbe is the only driver which
actually uses those values. The i40e and ice drivers just record the
values to return them later in queue-info queries, so we can drop them
from the Tx queue structure for those drivers and simply report the
defaults in all cases.
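For background, a hole appears wherever a small field is followed by one
with stricter alignment; grouping the small fields together removes it.
A stand-alone illustration, with invented field names rather than the
actual queue layout:

#include <stdint.h>

struct before { /* 2 bytes, then a 6-byte hole forced by alignment */
	uint16_t tx_tail;
	uint64_t offloads;
	uint16_t tx_next_dd; /* plus tail padding: 24 bytes on 64-bit */
};

struct after { /* small fields grouped: no holes, 16 bytes */
	uint16_t tx_tail;
	uint16_t tx_next_dd;
	uint64_t offloads;
};

_Static_assert(sizeof(struct before) > sizeof(struct after),
	"reordering alone shrinks the struct");

Running pahole over the built objects reports such holes directly, which
is how the one remaining 6-byte hole mentioned above was found.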
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 12 +++++-------
drivers/net/i40e/i40e_rxtx.c | 9 +++------
drivers/net/ice/ice_rxtx.c | 9 +++------
3 files changed, 11 insertions(+), 19 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 51ae3b051d..c372d2838b 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -41,7 +41,6 @@ struct ci_tx_queue {
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
struct ci_tx_entry_vec *sw_ring_vec;
};
- rte_iova_t tx_ring_dma; /* TX ring DMA address */
uint16_t nb_tx_desc; /* number of TX descriptors */
uint16_t tx_tail; /* current value of tail register */
uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
@@ -55,16 +54,14 @@ struct ci_tx_queue {
uint16_t tx_free_thresh;
/* Number of TX descriptors to use before RS bit is set. */
uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
uint16_t port_id; /* Device port identifier. */
uint16_t queue_id; /* TX queue index. */
uint16_t reg_idx;
- uint64_t offloads;
uint16_t tx_next_dd;
uint16_t tx_next_rs;
+ uint64_t offloads;
uint64_t mbuf_errors;
+ rte_iova_t tx_ring_dma; /* TX ring DMA address */
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
union { /* the VSI this queue belongs to */
@@ -95,9 +92,10 @@ struct ci_tx_queue {
const struct ixgbe_txq_ops *ops;
struct ixgbe_advctx_info *ctx_cache;
uint32_t ctx_curr;
-#ifdef RTE_LIB_SECURITY
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
-#endif
};
};
};
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 305bc53480..539b170266 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2539,9 +2539,6 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->nb_tx_desc = nb_desc;
txq->tx_rs_thresh = tx_rs_thresh;
txq->tx_free_thresh = tx_free_thresh;
- txq->pthresh = tx_conf->tx_thresh.pthresh;
- txq->hthresh = tx_conf->tx_thresh.hthresh;
- txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
txq->reg_idx = reg_idx;
txq->port_id = dev->data->port_id;
@@ -3310,9 +3307,9 @@ i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_thresh.pthresh = txq->pthresh;
- qinfo->conf.tx_thresh.hthresh = txq->hthresh;
- qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_thresh.pthresh = I40E_DEFAULT_TX_PTHRESH;
+ qinfo->conf.tx_thresh.hthresh = I40E_DEFAULT_TX_HTHRESH;
+ qinfo->conf.tx_thresh.wthresh = I40E_DEFAULT_TX_WTHRESH;
qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index bcc7c7a016..e2e147ba3e 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1492,9 +1492,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
txq->nb_tx_desc = nb_desc;
txq->tx_rs_thresh = tx_rs_thresh;
txq->tx_free_thresh = tx_free_thresh;
- txq->pthresh = tx_conf->tx_thresh.pthresh;
- txq->hthresh = tx_conf->tx_thresh.hthresh;
- txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
txq->reg_idx = vsi->base_queue + queue_idx;
@@ -1583,9 +1580,9 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_thresh.pthresh = txq->pthresh;
- qinfo->conf.tx_thresh.hthresh = txq->hthresh;
- qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_thresh.pthresh = ICE_DEFAULT_TX_PTHRESH;
+ qinfo->conf.tx_thresh.hthresh = ICE_DEFAULT_TX_HTHRESH;
+ qinfo->conf.tx_thresh.wthresh = ICE_DEFAULT_TX_WTHRESH;
qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 11/22] net/_common_intel: add post-Tx buffer free function
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (9 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 10/22] net/_common_intel: pack " Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 12/22] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
` (10 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
The actions taken to free buffers after Tx in the SSE and AVX code paths
of the i40e, iavf and ice drivers are all common, so centralize those in
the shared net/_common_intel code.
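The only per-driver difference is how the DD (descriptor done) bit is
tested, so the common function takes that check as a function pointer;
because ci_tx_free_bufs() is __rte_always_inline and each driver passes a
compile-time-constant function, the compiler can typically inline the
check so no indirect call is left on the fast path. A hypothetical
stand-alone demo of the pattern (types and names invented for
illustration, not the actual driver code):

#include <stdint.h>

struct demo_txq {
	const uint8_t *desc_flags; /* stand-in for the HW descriptor ring */
	uint16_t tx_next_dd;
	uint16_t tx_rs_thresh;
};

typedef int (*demo_desc_done_fn)(struct demo_txq *txq, uint16_t idx);

static inline int
demo_desc_done(struct demo_txq *txq, uint16_t idx)
{
	return txq->desc_flags[idx] & 0x1; /* driver-specific "DD" test */
}

static inline int
demo_free_bufs(struct demo_txq *txq, demo_desc_done_fn desc_done)
{
	/* desc_done is constant at each call site, so an always-inline
	 * wrapper lets the compiler fold the indirect call away
	 */
	if (!desc_done(txq, txq->tx_next_dd))
		return 0;
	return txq->tx_rs_thresh; /* the real code frees mbufs here */
}

Each driver then shrinks to a one-line wrapper around the common
function, as the i40e, iavf and ice hunks below show.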
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 71 ++++++++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_common.h | 72 ++++---------------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 61 ++++-----------------
drivers/net/ice/ice_rxtx_vec_common.h | 61 ++++-----------------
4 files changed, 98 insertions(+), 167 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c372d2838b..a930309c05 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -7,6 +7,7 @@
#include <stdint.h>
#include <rte_mbuf.h>
+#include <rte_ethdev.h>
/* forward declaration of the common intel (ci) queue structure */
struct ci_tx_queue;
@@ -107,4 +108,74 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
txep[i].mbuf = tx_pkts[i];
}
+#define IETH_VPMD_TX_MAX_FREE_BUF 64
+
+typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
+
+static __rte_always_inline int
+ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+{
+ struct ci_tx_entry *txep;
+ uint32_t n;
+ uint32_t i;
+ int nb_free = 0;
+ struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
+
+ /* check DD bits on threshold descriptor */
+ if (!desc_done(txq, txq->tx_next_dd))
+ return 0;
+
+ n = txq->tx_rs_thresh;
+
+ /* first buffer to free from S/W ring is at index
+ * tx_next_dd - (tx_rs_thresh-1)
+ */
+ txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
+
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
+ for (i = 0; i < n; i++) {
+ free[i] = txep[i].mbuf;
+ /* no need to reset txep[i].mbuf in vector path */
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
+ goto done;
+ }
+
+ m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+ if (likely(m != NULL)) {
+ free[0] = m;
+ nb_free = 1;
+ for (i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (likely(m != NULL)) {
+ if (likely(m->pool == free[0]->pool)) {
+ free[nb_free++] = m;
+ } else {
+ rte_mempool_put_bulk(free[0]->pool,
+ (void *)free,
+ nb_free);
+ free[0] = m;
+ nb_free = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+ } else {
+ for (i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (m != NULL)
+ rte_mempool_put(m->pool, m);
+ }
+ }
+
+done:
+ /* buffers were freed, update counters */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ return txq->tx_rs_thresh;
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 57d6263ccf..907d32dd0b 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -16,72 +16,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+i40e_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->i40e_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
+ rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
- for (i = 0; i < n; i++) {
- free[i] = txep[i].mbuf;
- /* no need to reset txep[i].mbuf in vector path */
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m != NULL)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, i40e_tx_desc_done);
}
static inline void
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index f1bb12c4f4..7130229f23 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -16,61 +16,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+iavf_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->iavf_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) ==
+ rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
iavf_tx_free_bufs(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, iavf_tx_desc_done);
}
static inline void
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index b39289ceb5..c6c3933299 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -12,61 +12,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->ice_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) ==
+ rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
- rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, ice_tx_desc_done);
}
static inline void
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 12/22] net/_common_intel: add Tx buffer free fn for AVX-512
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (10 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 11/22] net/_common_intel: add post-Tx buffer free function Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 13/22] net/iavf: use common Tx " Bruce Richardson
` (9 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes, Anatoly Burakov
The AVX-512 Tx buffer free code paths in the ice and i40e drivers are
identical, and differ from the regular post-Tx free function only in
that the SW ring from which the buffers are freed contains nothing but
the mbuf pointer. Merge these into a common function in
net/_common_intel to reduce duplication.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
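Note: the merged function works because the AVX-512 SW ring entry is
exactly one mbuf pointer, so an array of entries is layout-compatible
with the void * array expected by the mempool; this also lets the
common version replace the per-arch _mm512 copy loops with a plain
memcpy() into the mempool cache. A sketch of that layout argument
(the demo_* types are illustrative stand-ins):
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
struct rte_mbuf; /* opaque for this sketch */
/* full entry, used by the scalar and SSE/AVX2 paths */
struct demo_tx_entry {
	struct rte_mbuf *mbuf;
	uint16_t next_id;
	uint16_t last_id;
};
/* minimal entry, used by the AVX-512 paths */
struct demo_tx_entry_vec {
	struct rte_mbuf *mbuf;
};
int
main(void)
{
	/* the bulk copy into cache->objs is only valid because the
	 * vec entry is nothing but the mbuf pointer
	 */
	static_assert(sizeof(struct demo_tx_entry_vec) == sizeof(void *),
		      "vec entry must be pointer-sized");
	struct demo_tx_entry_vec ring[32] = {0};
	void *cache_objs[32];
	memcpy(cache_objs, ring, 32 * sizeof(void *)); /* as in ci_tx_free_bufs_vec */
	(void)cache_objs;
	return 0;
}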
drivers/net/_common_intel/tx.h | 92 +++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 114 +----------------------
drivers/net/ice/ice_rxtx_vec_avx512.c | 117 +-----------------------
3 files changed, 94 insertions(+), 229 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index a930309c05..84ff839672 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -178,4 +178,96 @@ ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
return txq->tx_rs_thresh;
}
+static __rte_always_inline int
+ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+{
+ int nb_free = 0;
+ struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF];
+ struct rte_mbuf *m;
+
+ /* check DD bits on threshold descriptor */
+ if (!desc_done(txq, txq->tx_next_dd))
+ return 0;
+
+ const uint32_t n = txq->tx_rs_thresh;
+
+ /* first buffer to free from S/W ring is at index
+ * tx_next_dd - (tx_rs_thresh - 1)
+ */
+ struct ci_tx_entry_vec *txep = txq->sw_ring_vec;
+ txep += txq->tx_next_dd - (n - 1);
+
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ struct rte_mempool *mp = txep[0].mbuf->pool;
+ void **cache_objs;
+ struct rte_mempool_cache *cache = rte_mempool_default_cache(mp, rte_lcore_id());
+
+ if (!cache || cache->len == 0)
+ goto normal;
+
+ cache_objs = &cache->objs[cache->len];
+
+ if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
+ rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
+ goto done;
+ }
+
+ /* The cache follows the following algorithm
+ * 1. Add the objects to the cache
+ * 2. Anything greater than the cache min value (if it
+ * crosses the cache flush threshold) is flushed to the ring.
+ */
+ /* Add elements back into the cache */
+ uint32_t copied = 0;
+ /* n is multiple of 32 */
+ while (copied < n) {
+ memcpy(&cache_objs[copied], &txep[copied], 32 * sizeof(void *));
+ copied += 32;
+ }
+ cache->len += n;
+
+ if (cache->len >= cache->flushthresh) {
+ rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
+ cache->len - cache->size);
+ cache->len = cache->size;
+ }
+ goto done;
+ }
+
+normal:
+ m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+ if (likely(m)) {
+ free[0] = m;
+ nb_free = 1;
+ for (uint32_t i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (likely(m)) {
+ if (likely(m->pool == free[0]->pool)) {
+ free[nb_free++] = m;
+ } else {
+ rte_mempool_put_bulk(free[0]->pool, (void *)free, nb_free);
+ free[0] = m;
+ nb_free = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+ } else {
+ for (uint32_t i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (m)
+ rte_mempool_put(m->pool, m);
+ }
+ }
+
+done:
+ /* buffers were freed, update counters */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ return txq->tx_rs_thresh;
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index a3f6d1667f..9bb2a44231 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -754,118 +754,6 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
rx_pkts + retval, nb_pkts);
}
-static __rte_always_inline int
-i40e_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = (void *)txq->sw_ring;
- txep += txq->tx_next_dd - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- void **cache_objs;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
-
- if (!cache || n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_generic_put(mp, (void *)txep, n, cache);
- goto done;
- }
-
- cache_objs = &cache->objs[cache->len];
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it
- * crosses the cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_load_si512(&txep[copied]);
- const __m512i b = _mm512_load_si512(&txep[copied + 8]);
- const __m512i c = _mm512_load_si512(&txep[copied + 16]);
- const __m512i d = _mm512_load_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_load_si512(&txep[copied]);
- const __m512i b = _mm512_load_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk
- (mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- rte_mbuf_prefetch_part2(txep[i + 3].mbuf);
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static inline void
vtx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
{
@@ -941,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index eabd8b04a0..538be707ef 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -859,121 +859,6 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
rx_pkts + retval, nb_pkts);
}
-static __rte_always_inline int
-ice_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
- rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh - 1)
- */
- txep = (void *)txq->sw_ring;
- txep += txq->tx_next_dd - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- void **cache_objs;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
-
- if (!cache || cache->len == 0)
- goto normal;
-
- cache_objs = &cache->objs[cache->len];
-
- if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
- goto done;
- }
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it
- * crosses the cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
- const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
- const __m512i d = _mm512_loadu_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk
- (mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
-normal:
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline void
ice_vtx1(volatile struct ice_tx_desc *txdp,
struct rte_mbuf *pkt, uint64_t flags, bool do_offload)
@@ -1064,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 13/22] net/iavf: use common Tx free fn for AVX-512
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (11 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 12/22] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 14/22] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
` (8 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes,
Vladimir Medvedkin, Anatoly Burakov
Switch the iavf driver to use the common Tx free function. This
requires one additional parameter to that function, since iavf
sometimes uses context descriptors, meaning there are two descriptors
per SW ring slot.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
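Note: the new ctx_descs flag scales the buffer count by a right shift,
relying on a bool promoting to 0 or 1; when each packet needs a data
descriptor plus a context descriptor, tx_rs_thresh descriptors cover
only half as many SW ring slots. A small standalone illustration
(hypothetical helper name):
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
static uint32_t
slots_to_free(uint16_t tx_rs_thresh, bool ctx_descs)
{
	return tx_rs_thresh >> ctx_descs; /* bool promotes to 0 or 1 */
}
int
main(void)
{
	printf("%u\n", (unsigned int)slots_to_free(32, false)); /* 32 */
	printf("%u\n", (unsigned int)slots_to_free(32, true));  /* 16 */
	return 0;
}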
drivers/net/_common_intel/tx.h | 6 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 119 +-----------------------
drivers/net/ice/ice_rxtx_vec_avx512.c | 2 +-
4 files changed, 7 insertions(+), 122 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 84ff839672..26aef528fa 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -179,7 +179,7 @@ ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
}
static __rte_always_inline int
-ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
{
int nb_free = 0;
struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF];
@@ -189,13 +189,13 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
if (!desc_done(txq, txq->tx_next_dd))
return 0;
- const uint32_t n = txq->tx_rs_thresh;
+ const uint32_t n = txq->tx_rs_thresh >> ctx_descs;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh - 1)
*/
struct ci_tx_entry_vec *txep = txq->sw_ring_vec;
- txep += txq->tx_next_dd - (n - 1);
+ txep += (txq->tx_next_dd >> ctx_descs) - (n - 1);
if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 9bb2a44231..c555c3491d 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -829,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- ci_tx_free_bufs_vec(txq, i40e_tx_desc_done);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 9cf7171524..8543490c70 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1844,121 +1844,6 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
true);
}
-static __rte_always_inline int
-iavf_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh >> txq->use_ctx;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = (void *)txq->sw_ring;
- txep += (txq->tx_next_dd >> txq->use_ctx) - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
- void **cache_objs;
-
- if (!cache || cache->len == 0)
- goto normal;
-
- cache_objs = &cache->objs[cache->len];
-
- if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
- goto done;
- }
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it crosses the
- * cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
- const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
- const __m512i d = _mm512_loadu_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp,
- &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
-normal:
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline void
tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -2320,7 +2205,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -2388,7 +2273,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, true);
nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1);
nb_commit &= 0xFFFE;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 538be707ef..f6ec593f96 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -949,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ci_tx_free_bufs_vec(txq, ice_tx_desc_done);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 14/22] net/ice: move Tx queue mbuf cleanup fn to common
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (12 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 13/22] net/iavf: use common Tx " Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 15/22] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
` (7 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Konstantin Ananyev
The functions to loop over the Tx queue and clean up all the mbufs on
it, e.g. for queue shutdown, are not device specific and so can move
into the _common_intel headers. The only complication is ensuring that
the correct ring format, either minimal vector or full structure, is
used.
The ice driver currently uses two functions and a function pointer to
handle this - and one of those functions performs a further check
internally - so we can simplify it all down to a single common
function, with a flag set in the appropriate place. This removes the
need to check for the AVX-512-specific burst functions, which provided
the only code paths using the smaller struct in this driver.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
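Note: because the vector paths leave stale mbuf pointers in the SW ring
after a bulk free, cleanup must walk only from the slot after the last
freed descriptor up to tx_tail, wrapping once at the ring end. A
minimal sketch of that walk, mirroring the shape of the
IETH_FREE_BUFS_LOOP macro (the demo function name is hypothetical):
#include <stdint.h>
#include <stdio.h>
/* visit [start, end) on a ring of nb_desc slots, wrapping once */
static void
walk_ring(uint16_t nb_desc, uint16_t start, uint16_t end)
{
	uint16_t i = start;
	if (end < i) {
		for (; i < nb_desc; i++)
			printf("free slot %u\n", i);
		i = 0;
	}
	for (; i < end; i++)
		printf("free slot %u\n", i);
}
int
main(void)
{
	walk_ring(8, 6, 2); /* frees slots 6, 7, 0, 1 */
	return 0;
}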
drivers/net/_common_intel/tx.h | 49 ++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.c | 5 +--
drivers/net/ice/ice_ethdev.h | 3 +-
drivers/net/ice/ice_rxtx.c | 33 +++++------------
drivers/net/ice/ice_rxtx_vec_common.h | 51 ---------------------------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 ---
6 files changed, 60 insertions(+), 85 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 26aef528fa..1bf2a61b2f 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -65,6 +65,8 @@ struct ci_tx_queue {
rte_iova_t tx_ring_dma; /* TX ring DMA address */
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
+ bool vector_tx; /* port is using vector TX */
+ bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
union { /* the VSI this queue belongs to */
struct i40e_vsi *i40e_vsi;
struct iavf_vsi *iavf_vsi;
@@ -74,7 +76,6 @@ struct ci_tx_queue {
union {
struct { /* ICE driver specific values */
- ice_tx_release_mbufs_t tx_rel_mbufs;
uint32_t q_teid; /* TX schedule node id. */
};
struct { /* I40E driver specific values */
@@ -270,4 +271,50 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
return txq->tx_rs_thresh;
}
+#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \
+ uint16_t i = start; \
+ if (txq->tx_tail < i) { \
+ for (; i < txq->nb_tx_desc; i++) { \
+ rte_pktmbuf_free_seg(swr[i].mbuf); \
+ swr[i].mbuf = NULL; \
+ } \
+ i = 0; \
+ } \
+ for (; i < txq->tx_tail; i++) { \
+ rte_pktmbuf_free_seg(swr[i].mbuf); \
+ swr[i].mbuf = NULL; \
+ } \
+} while (0)
+
+static inline void
+ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
+{
+ if (unlikely(!txq || !txq->sw_ring))
+ return;
+
+ if (!txq->vector_tx) {
+ for (uint16_t i = 0; i < txq->nb_tx_desc; i++) {
+ if (txq->sw_ring[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+ txq->sw_ring[i].mbuf = NULL;
+ }
+ }
+ return;
+ }
+
+ /**
+ * vPMD tx will not set sw_ring's mbuf to NULL after free,
+ * so need to free remains more carefully.
+ */
+ const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;
+
+ if (txq->vector_sw_ring) {
+ struct ci_tx_entry_vec *swr = txq->sw_ring_vec;
+ IETH_FREE_BUFS_LOOP(txq, swr, start);
+ } else {
+ struct ci_tx_entry *swr = txq->sw_ring;
+ IETH_FREE_BUFS_LOOP(txq, swr, start);
+ }
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index a0c065d78c..c20399cd84 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -24,6 +24,7 @@
#include "ice_generic_flow.h"
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#include "_common_intel/tx.h"
#define DCF_NUM_MACADDR_MAX 64
@@ -500,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -650,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ba54655499..afe8dae497 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -621,13 +621,12 @@ struct ice_adapter {
/* Set bit if the engine is disabled */
unsigned long disabled_engine_mask;
struct ice_parser *psr;
-#ifdef RTE_ARCH_X86
+ /* used only on X86, zero on other Archs */
bool rx_use_avx2;
bool rx_use_avx512;
bool tx_use_avx2;
bool tx_use_avx512;
bool rx_vec_offload_support;
-#endif
};
struct ice_vsi_vlan_pvid_info {
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index e2e147ba3e..0a890e587c 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -751,6 +751,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
struct ice_aqc_add_tx_qgrp *txq_elem;
struct ice_tlan_ctx tx_ctx;
int buf_len;
+ struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -822,6 +823,10 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EIO;
}
+ /* record what kind of descriptor cleanup we need on teardown */
+ txq->vector_tx = ad->tx_vec_allowed;
+ txq->vector_sw_ring = ad->tx_use_avx512;
+
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
rte_free(txq_elem);
@@ -1006,25 +1011,6 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return 0;
}
-/* Free all mbufs for descriptors in tx queue */
-static void
-_ice_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
- return;
- }
-
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static void
ice_reset_tx_queue(struct ci_tx_queue *txq)
{
@@ -1103,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
ice_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1166,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->qtx_tail = NULL;
return 0;
@@ -1518,7 +1504,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
ice_reset_tx_queue(txq);
txq->q_set = true;
dev->data->tx_queues[queue_idx] = txq;
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs;
ice_set_tx_function_flag(dev, txq);
return 0;
@@ -1546,8 +1531,7 @@ ice_tx_queue_release(void *txq)
return;
}
- if (q->tx_rel_mbufs != NULL)
- q->tx_rel_mbufs(q);
+ ci_txq_release_all_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -2460,7 +2444,6 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
txq->q_set = true;
pf->fdir.txq = txq;
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs;
return ICE_SUCCESS;
}
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index c6c3933299..907828b675 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -61,57 +61,6 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline void
-_ice_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (unlikely(!txq || !txq->sw_ring)) {
- PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
- return;
- }
-
- /**
- * vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
- */
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
-
-#ifdef __AVX512VL__
- struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id];
-
- if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
- dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- } else
-#endif
- {
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static inline int
ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index f11528385a..bff39c28d8 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -795,10 +795,6 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq)
int __rte_cold
ice_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
- if (!txq)
- return -1;
-
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs_vec;
return 0;
}
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 15/22] net/i40e: use common Tx queue mbuf cleanup fn
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (13 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 14/22] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 16/22] net/ixgbe: " Bruce Richardson
` (6 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes
Update the i40e driver to match the "ice" driver and use the common
mbuf ring cleanup code on shutdown of a Tx queue.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
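Note: the key change of shape here is that the ring format is now
recorded once at queue start (vector_tx / vector_sw_ring) instead of
being rediscovered at teardown by comparing dev->tx_pkt_burst against
each vector burst function. A tiny sketch of the idea (illustrative
names only):
#include <stdbool.h>
#include <stdio.h>
struct demo_txq {
	bool vector_tx;
	bool vector_sw_ring;
};
/* at queue start: record the datapath choice once */
static void
queue_start(struct demo_txq *txq, bool vec_allowed, bool use_avx512)
{
	txq->vector_tx = vec_allowed;
	txq->vector_sw_ring = use_avx512;
}
/* at teardown: branch on the recorded flags rather than on
 * function-pointer comparisons
 */
static void
queue_teardown(const struct demo_txq *txq)
{
	if (!txq->vector_tx)
		printf("scalar cleanup\n");
	else if (txq->vector_sw_ring)
		printf("vector cleanup, minimal entries\n");
	else
		printf("vector cleanup, full entries\n");
}
int
main(void)
{
	struct demo_txq txq;
	queue_start(&txq, true, false);
	queue_teardown(&txq); /* vector cleanup, full entries */
	return 0;
}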
drivers/net/i40e/i40e_ethdev.h | 4 +-
drivers/net/i40e/i40e_rxtx.c | 70 ++++------------------------------
drivers/net/i40e/i40e_rxtx.h | 1 -
3 files changed, 9 insertions(+), 66 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index d351193ed9..ccc8732d7d 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -1260,12 +1260,12 @@ struct i40e_adapter {
/* For RSS reta table update */
uint8_t rss_reta_updated;
-#ifdef RTE_ARCH_X86
+
+ /* used only on x86, zero on other architectures */
bool rx_use_avx2;
bool rx_use_avx512;
bool tx_use_avx2;
bool tx_use_avx512;
-#endif
};
/**
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 539b170266..b70919c5dc 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1875,6 +1875,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int err;
struct ci_tx_queue *txq;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct i40e_adapter *ad = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1889,6 +1890,9 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
PMD_DRV_LOG(WARNING, "TX queue %u is deferred start",
tx_queue_id);
+ txq->vector_tx = ad->tx_vec_allowed;
+ txq->vector_sw_ring = ad->tx_use_avx512;
+
/*
* tx_queue_id is queue id application refers to, while
* rxq->reg_idx is the real queue index.
@@ -1929,7 +1933,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return err;
}
- i40e_tx_queue_release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
i40e_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -2604,7 +2608,7 @@ i40e_tx_queue_release(void *txq)
return;
}
- i40e_tx_queue_release_mbufs(q);
+ ci_txq_release_all_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -2701,66 +2705,6 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
rxq->rxrearm_nb = 0;
}
-void
-i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- struct rte_eth_dev *dev;
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
- return;
- }
-
- dev = &rte_eth_devices[txq->port_id];
-
- /**
- * vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
- */
-#ifdef CC_AVX512_SUPPORT
- if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- return;
- }
-#endif
- if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx2 ||
- dev->tx_pkt_burst == i40e_xmit_pkts_vec) {
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- } else {
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
- }
-}
-
static int
i40e_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
@@ -3127,7 +3071,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
if (!dev->data->tx_queues[i])
continue;
- i40e_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+ ci_txq_release_all_mbufs(dev->data->tx_queues[i]);
i40e_reset_tx_queue(dev->data->tx_queues[i]);
}
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 043d1df912..858b8433e9 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -179,7 +179,6 @@ void i40e_dev_clear_queues(struct rte_eth_dev *dev);
void i40e_dev_free_queues(struct rte_eth_dev *dev);
void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
void i40e_reset_tx_queue(struct ci_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq);
int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 16/22] net/ixgbe: use common Tx queue mbuf cleanup fn
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (14 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 15/22] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 17/22] net/iavf: " Bruce Richardson
` (5 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Update the ixgbe driver to use the common cleanup function.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 22 +++---------------
drivers/net/ixgbe/ixgbe_rxtx.h | 1 -
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 28 ++---------------------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 7 ------
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 7 ------
5 files changed, 5 insertions(+), 60 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 344ef85685..bf9d461b06 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2334,21 +2334,6 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
*
**********************************************************************/
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- unsigned i;
-
- if (txq->sw_ring != NULL) {
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf != NULL) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
- }
-}
-
static int
ixgbe_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt)
{
@@ -2472,7 +2457,7 @@ static void __rte_cold
ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->free_swring(txq);
rte_memzone_free(txq->mz);
rte_free(txq);
@@ -2526,7 +2511,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops def_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
@@ -3380,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->reset(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -5655,7 +5639,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
if (txq->ops != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->reset(txq);
}
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 4333e5bf2f..11689eb432 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -181,7 +181,6 @@ struct ixgbe_advctx_info {
};
struct ixgbe_txq_ops {
- void (*release_mbufs)(struct ci_tx_queue *txq);
void (*free_swring)(struct ci_tx_queue *txq);
void (*reset)(struct ci_tx_queue *txq);
};
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 81fd8bb64d..65794e45cb 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -78,32 +78,6 @@ tx_backlog_entry(struct ci_tx_entry_vec *txep,
txep[i].mbuf = tx_pkts[i];
}
-static inline void
-_ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- unsigned int i;
- struct ci_tx_entry_vec *txe;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-
- if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc)
- return;
-
- /* release the used mbufs in sw_ring */
- for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
- i != txq->tx_tail;
- i = (i + 1) % txq->nb_tx_desc) {
- txe = &txq->sw_ring_vec[i];
- rte_pktmbuf_free_seg(txe->mbuf);
- }
- txq->nb_tx_free = max_desc;
-
- /* reset tx_entry */
- for (i = 0; i < txq->nb_tx_desc; i++) {
- txe = &txq->sw_ring_vec[i];
- txe->mbuf = NULL;
- }
-}
-
static inline void
_ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -208,6 +182,8 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
/* leave the first one for overflow */
txq->sw_ring_vec = txq->sw_ring_vec + 1;
txq->ops = txq_ops;
+ txq->vector_tx = 1;
+ txq->vector_sw_ring = 1;
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index cb749a3760..2ccb399b64 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -633,12 +633,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_pkts;
}
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- _ixgbe_tx_queue_release_mbufs_vec(txq);
-}
-
void __rte_cold
ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -658,7 +652,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops vec_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index e46550f76a..fa26365f06 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -756,12 +756,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_pkts;
}
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- _ixgbe_tx_queue_release_mbufs_vec(txq);
-}
-
void __rte_cold
ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -781,7 +775,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops vec_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v2 17/22] net/iavf: use common Tx queue mbuf cleanup fn
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (15 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 16/22] net/ixgbe: " Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 18/22] net/ice: use vector SW ring for all vector paths Bruce Richardson
` (4 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin,
Konstantin Ananyev, Anatoly Burakov
Adjust the iavf driver to also use the common mbuf freeing functions
on Tx queue release/cleanup. The implementation is complicated a
little by the need to integrate the additional "use_ctx" parameter for
the iavf code, but the changes in other drivers are minimal - just a
constant "false" parameter.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
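Note: with context descriptors enabled, every second HW descriptor
carries no mbuf, so all three walk bounds are converted from descriptor
indexes to SW ring slot indexes by one right shift. A standalone sketch
of the bound computation (hypothetical helper, example values):
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
static void
cleanup_bounds(uint16_t tx_next_dd, uint16_t tx_rs_thresh,
	       uint16_t nb_tx_desc, uint16_t tx_tail, bool use_ctx)
{
	const uint16_t start = (uint16_t)(tx_next_dd - tx_rs_thresh + 1) >> use_ctx;
	const uint16_t nb_desc = nb_tx_desc >> use_ctx;
	const uint16_t end = tx_tail >> use_ctx;
	printf("walk slots [%u..%u) wrapping at %u\n",
	       (unsigned int)start, (unsigned int)end, (unsigned int)nb_desc);
}
int
main(void)
{
	cleanup_bounds(31, 32, 512, 64, false); /* [0..64) wrapping at 512 */
	cleanup_bounds(63, 32, 512, 64, true);  /* [16..32) wrapping at 256 */
	return 0;
}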
drivers/net/_common_intel/tx.h | 27 +++++++++---------
drivers/net/i40e/i40e_rxtx.c | 6 ++--
drivers/net/iavf/iavf_rxtx.c | 37 ++-----------------------
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 24 ++--------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 18 ------------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 9 ++----
drivers/net/ice/ice_dcf_ethdev.c | 4 +--
drivers/net/ice/ice_rxtx.c | 6 ++--
drivers/net/ixgbe/ixgbe_rxtx.c | 6 ++--
9 files changed, 31 insertions(+), 106 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 1bf2a61b2f..310b51adcf 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -271,23 +271,23 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
return txq->tx_rs_thresh;
}
-#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \
+#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
uint16_t i = start; \
- if (txq->tx_tail < i) { \
- for (; i < txq->nb_tx_desc; i++) { \
+ if (end < i) { \
+ for (; i < nb_desc; i++) { \
rte_pktmbuf_free_seg(swr[i].mbuf); \
swr[i].mbuf = NULL; \
} \
i = 0; \
} \
- for (; i < txq->tx_tail; i++) { \
+ for (; i < end; i++) { \
rte_pktmbuf_free_seg(swr[i].mbuf); \
swr[i].mbuf = NULL; \
} \
} while (0)
static inline void
-ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
+ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
{
if (unlikely(!txq || !txq->sw_ring))
return;
@@ -306,15 +306,14 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
* vPMD tx will not set sw_ring's mbuf to NULL after free,
* so need to free remains more carefully.
*/
- const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;
-
- if (txq->vector_sw_ring) {
- struct ci_tx_entry_vec *swr = txq->sw_ring_vec;
- IETH_FREE_BUFS_LOOP(txq, swr, start);
- } else {
- struct ci_tx_entry *swr = txq->sw_ring;
- IETH_FREE_BUFS_LOOP(txq, swr, start);
- }
+ const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
+ const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
+ const uint16_t end = txq->tx_tail >> use_ctx;
+
+ if (txq->vector_sw_ring)
+ IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
+ else
+ IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
}
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b70919c5dc..081d743e62 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1933,7 +1933,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return err;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
i40e_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -2608,7 +2608,7 @@ i40e_tx_queue_release(void *txq)
return;
}
- ci_txq_release_all_mbufs(q);
+ ci_txq_release_all_mbufs(q, false);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -3071,7 +3071,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
if (!dev->data->tx_queues[i])
continue;
- ci_txq_release_all_mbufs(dev->data->tx_queues[i]);
+ ci_txq_release_all_mbufs(dev->data->tx_queues[i], false);
i40e_reset_tx_queue(dev->data->tx_queues[i]);
}
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 7e381b2a17..f0ab881ac5 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -387,24 +387,6 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq)
rxq->rx_nb_avail = 0;
}
-static inline void
-release_txq_mbufs(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
- return;
- }
-
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static const
struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
[IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_rxq_mbufs,
@@ -413,18 +395,6 @@ struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
#endif
};
-static const
-struct iavf_txq_ops iavf_txq_release_mbufs_ops[] = {
- [IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_txq_mbufs,
-#ifdef RTE_ARCH_X86
- [IAVF_REL_MBUFS_SSE_VEC].release_mbufs = iavf_tx_queue_release_mbufs_sse,
-#ifdef CC_AVX512_SUPPORT
- [IAVF_REL_MBUFS_AVX512_VEC].release_mbufs = iavf_tx_queue_release_mbufs_avx512,
-#endif
-#endif
-
-};
-
static inline void
iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
struct rte_mbuf *mb,
@@ -889,7 +859,6 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->q_set = true;
dev->data->tx_queues[queue_idx] = txq;
txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(queue_idx);
- txq->rel_mbufs_type = IAVF_REL_MBUFS_DEFAULT;
if (check_tx_vec_allow(txq) == false) {
struct iavf_adapter *ad =
@@ -1068,7 +1037,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, txq->use_ctx);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1097,7 +1066,7 @@ iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!q)
return;
- iavf_txq_release_mbufs_ops[q->rel_mbufs_type].release_mbufs(q);
+ ci_txq_release_all_mbufs(q, q->use_ctx);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -1114,7 +1083,7 @@ iavf_reset_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, txq->use_ctx);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 8543490c70..007759e451 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2357,31 +2357,11 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
}
-void __rte_cold
-iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq)
-{
- unsigned int i;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
- const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */
- const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- if (!txq->sw_ring || txq->nb_tx_free == max_desc)
- return;
-
- i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx;
- while (i != end_desc) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- if (++i == wrap_point)
- i = 0;
- }
-}
-
int __rte_cold
iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
{
- txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC;
+ txq->vector_tx = true;
+ txq->vector_sw_ring = true;
return 0;
}
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 7130229f23..6f94587eee 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -60,24 +60,6 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline void
-_iavf_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- unsigned i;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-
- if (!txq->sw_ring || txq->nb_tx_free == max_desc)
- return;
-
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- while (i != txq->tx_tail) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- if (++i == txq->nb_tx_desc)
- i = 0;
- }
-}
-
static inline int
iavf_rxq_vec_setup_default(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 5c0b2fff46..3adf2a59e4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1458,16 +1458,11 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
_iavf_rx_queue_release_mbufs_vec(rxq);
}
-void __rte_cold
-iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq)
-{
- _iavf_tx_queue_release_mbufs_vec(txq);
-}
-
int __rte_cold
iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
- txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
+ txq->vector_tx = true;
+ txq->vector_sw_ring = false;
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index c20399cd84..57fe44ebb3 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -501,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -651,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0a890e587c..ad0ddf6a88 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1089,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
ice_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1152,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->qtx_tail = NULL;
return 0;
@@ -1531,7 +1531,7 @@ ice_tx_queue_release(void *txq)
return;
}
- ci_txq_release_all_mbufs(q);
+ ci_txq_release_all_mbufs(q, false);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bf9d461b06..3b7a6a6f0e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2457,7 +2457,7 @@ static void __rte_cold
ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->free_swring(txq);
rte_memzone_free(txq->mz);
rte_free(txq);
@@ -3364,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->reset(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -5639,7 +5639,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
if (txq->ops != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->reset(txq);
}
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
--
2.43.0
* [PATCH v2 18/22] net/ice: use vector SW ring for all vector paths
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (16 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 17/22] net/iavf: " Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 19/22] net/i40e: " Bruce Richardson
` (3 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Konstantin Ananyev
The AVX-512 code path used a smaller SW ring structure containing only
the mbuf pointer and no other fields. The other fields are only used in
the scalar code path, so update all vector driver code paths to use the
smaller, faster structure.
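For reference, a rough sketch of the two entry layouts in question, as
introduced by the common Tx entry patch earlier in this series (field
names per the common tx.h; shown for orientation only, not a verbatim
copy of the header):
struct ci_tx_entry {                    /* full entry: scalar path only */
        struct rte_mbuf *mbuf;          /* mbuf held for this slot */
        uint16_t next_id;               /* next descriptor in the chain */
        uint16_t last_id;               /* last descriptor for this pkt */
};
struct ci_tx_entry_vec {                /* compact entry: vector paths */
        struct rte_mbuf *mbuf;
};
On 64-bit builds the full entry pads to 16 bytes while the compact one is
8, so twice as many slots fit per cache line; that is where the "smaller,
faster" gain comes from.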
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 7 +++++++
drivers/net/ice/ice_rxtx.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/ice/ice_rxtx_vec_avx512.c | 14 ++------------
drivers/net/ice/ice_rxtx_vec_common.h | 6 ------
drivers/net/ice/ice_rxtx_vec_sse.c | 12 ++++++------
6 files changed, 22 insertions(+), 31 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 310b51adcf..aa42b9b49f 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -109,6 +109,13 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
txep[i].mbuf = tx_pkts[i];
}
+static __rte_always_inline void
+ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ for (uint16_t i = 0; i < nb_pkts; ++i)
+ txep[i].mbuf = tx_pkts[i];
+}
+
#define IETH_VPMD_TX_MAX_FREE_BUF 64
typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ad0ddf6a88..77cb6688a7 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* record what kind of descriptor cleanup we need on teardown */
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = ad->tx_use_avx512;
+ txq->vector_sw_ring = txq->vector_tx;
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 12ffa0fa9a..98bab322b4 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -867,7 +867,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_vec(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -875,13 +875,13 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -896,10 +896,10 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index f6ec593f96..481f784e34 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -924,16 +924,6 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
}
}
-static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool do_offload)
@@ -964,7 +954,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry_avx512(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, do_offload);
tx_pkts += (n - 1);
@@ -982,7 +972,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
}
- ice_tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, do_offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 907828b675..aa709fb51c 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -20,12 +20,6 @@ ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, ice_tx_desc_done);
-}
-
static inline void
_ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index bff39c28d8..73e3e9eb54 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -709,7 +709,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_vec(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
nb_commit = nb_pkts;
@@ -718,13 +718,13 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
ice_vtx1(txdp, *tx_pkts, flags);
@@ -738,10 +738,10 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v2 19/22] net/i40e: use vector SW ring for all vector paths
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (17 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 18/22] net/ice: use vector SW ring for all vector paths Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 20/22] net/iavf: " Bruce Richardson
` (2 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage
The AVX-512 code path used a smaller SW ring structure containing only
the mbuf pointer and no other fields. The other fields are only used in
the scalar code path, so update all vector driver code paths (AVX2, SSE,
Neon, Altivec) to use the smaller, faster structure.
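All of these fixed-burst functions share the same two-phase fill shape
that the call-site diffs below keep touching: fill entries up to the ring
end, wrap to index zero, then fill the remainder. A self-contained toy of
just that wrap logic (illustrative names and types, not driver code):
#include <stdint.h>
#include <stdio.h>
#define RING_SIZE 8
static void
fill_ring(int *ring, uint16_t *tail, const int *pkts, uint16_t nb)
{
        uint16_t tx_id = *tail;
        uint16_t n = RING_SIZE - tx_id;         /* slots before ring end */
        if (nb >= n) {                          /* phase 1: fill to end */
                for (uint16_t i = 0; i < n; i++)
                        ring[tx_id + i] = pkts[i];
                pkts += n;
                nb -= n;
                tx_id = 0;                      /* wrap */
        }
        for (uint16_t i = 0; i < nb; i++)       /* phase 2: remainder */
                ring[tx_id + i] = pkts[i];
        *tail = tx_id + nb;
}
int main(void)
{
        int ring[RING_SIZE] = {0};
        const int pkts[5] = {1, 2, 3, 4, 5};
        uint16_t tail = 6;                      /* two slots to ring end */
        fill_ring(ring, &tail, pkts, 5);
        for (int i = 0; i < RING_SIZE; i++)
                printf("%d ", ring[i]);         /* prints: 3 4 5 0 0 0 1 2 */
        printf("\ntail=%u\n", tail);            /* prints: tail=3 */
        return 0;
}
The driver versions do the same split so that ci_tx_backlog_entry_vec()
and the descriptor-write loops never need a range check per iteration.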
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 8 +++++---
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 14 ++------------
drivers/net/i40e/i40e_rxtx_vec_common.h | 6 ------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_sse.c | 12 ++++++------
7 files changed, 31 insertions(+), 45 deletions(-)
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 081d743e62..745c467912 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
tx_queue_id);
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = ad->tx_use_avx512;
+ txq->vector_sw_ring = txq->vector_tx;
/*
* tx_queue_id is queue id application refers to, while
@@ -3550,9 +3550,11 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
}
}
+ if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_128)
+ ad->tx_vec_allowed = false;
+
if (ad->tx_simple_allowed) {
- if (ad->tx_vec_allowed &&
- rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+ if (ad->tx_vec_allowed) {
#ifdef RTE_ARCH_X86
if (ad->tx_use_avx512) {
#ifdef CC_AVX512_SUPPORT
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 500bba2cef..b6900a3e15 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -553,14 +553,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
nb_commit = nb_pkts;
@@ -569,13 +569,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -589,10 +589,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 29bef64287..2477573c01 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -745,13 +745,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -759,13 +759,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -780,10 +780,10 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index c555c3491d..2497e6a8f0 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -807,16 +807,6 @@ vtx(volatile struct i40e_tx_desc *txdp,
}
}
-static __rte_always_inline void
-tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline uint16_t
i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
@@ -844,7 +834,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry_avx512(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -862,7 +852,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
}
- tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 907d32dd0b..733dc797cd 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -24,12 +24,6 @@ i40e_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-i40e_tx_free_bufs(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, i40e_tx_desc_done);
-}
-
static inline void
_i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index c97f337e43..b398d66154 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -681,14 +681,14 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -696,13 +696,13 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -716,10 +716,10 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 2c467e2089..90c57e59d0 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -700,14 +700,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -715,13 +715,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -735,10 +735,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v2 20/22] net/iavf: use vector SW ring for all vector paths
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (18 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 19/22] net/i40e: " Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 21/22] net/_common_intel: remove unneeded code Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 22/22] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Vladimir Medvedkin, Ian Stokes, Konstantin Ananyev
The AVX-512 code path used a smaller SW ring structure containing only
the mbuf pointer and no other fields. The other fields are only used in
the scalar code path, so update all vector driver code paths (AVX2, SSE)
to use the smaller, faster structure.
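Note the shape of the replacement calls below:
ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false) passes a per-driver
"descriptor done" predicate into an always-inline common helper (the
trailing bool is the helper's ctx_descs parameter), so the apparent
indirect call folds away at compile time. A minimal standalone model of
that pattern (toy names; not the driver's actual API):
#include <stdint.h>
#include <stdio.h>
struct toy_txq {
        uint16_t next_dd;       /* descriptor to check for completion */
        uint16_t rs_thresh;     /* how many entries each check covers */
        uint64_t dd_bits;       /* stand-in for HW-written DD flags */
};
typedef int (*desc_done_fn)(const struct toy_txq *txq, uint16_t idx);
/* driver-specific predicate, analogous to iavf_tx_desc_done() */
static inline int
toy_desc_done(const struct toy_txq *txq, uint16_t idx)
{
        return (txq->dd_bits >> idx) & 1;
}
/* common helper parameterized by the predicate; inlining removes the
 * function-pointer call, just as with ci_tx_free_bufs_vec()
 */
static inline int
toy_free_bufs(struct toy_txq *txq, desc_done_fn desc_done)
{
        if (!desc_done(txq, txq->next_dd))
                return 0;       /* HW not finished: free nothing */
        txq->next_dd += txq->rs_thresh;
        return txq->rs_thresh;
}
int main(void)
{
        struct toy_txq q = { .next_dd = 3, .rs_thresh = 4,
                             .dd_bits = UINT64_C(1) << 3 };
        printf("freed %d\n", toy_free_bufs(&q, toy_desc_done)); /* freed 4 */
        return 0;
}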
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/iavf/iavf_rxtx.c | 7 -------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 8 --------
drivers/net/iavf/iavf_rxtx_vec_common.h | 6 ------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 14 +++++++-------
5 files changed, 13 insertions(+), 34 deletions(-)
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index f0ab881ac5..6692f6992b 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -4193,14 +4193,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
-#ifdef CC_AVX512_SUPPORT
- if (use_avx512)
- iavf_txq_vec_setup_avx512(txq);
- else
- iavf_txq_vec_setup(txq);
-#else
iavf_txq_vec_setup(txq);
-#endif
}
if (no_poll_on_link_down) {
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index fdb98b417a..b847886081 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1736,14 +1736,14 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -1752,13 +1752,13 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
iavf_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -1773,10 +1773,10 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 007759e451..641f3311eb 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2357,14 +2357,6 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
}
-int __rte_cold
-iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
-{
- txq->vector_tx = true;
- txq->vector_sw_ring = true;
- return 0;
-}
-
uint16_t
iavf_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 6f94587eee..c69399a173 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -24,12 +24,6 @@ iavf_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-iavf_tx_free_bufs(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, iavf_tx_desc_done);
-}
-
static inline void
_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 3adf2a59e4..9f7db80bfd 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1368,14 +1368,14 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -1384,13 +1384,13 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -1404,10 +1404,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
@@ -1462,7 +1462,7 @@ int __rte_cold
iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
txq->vector_tx = true;
- txq->vector_sw_ring = false;
+ txq->vector_sw_ring = txq->vector_tx;
return 0;
}
--
2.43.0
* [PATCH v2 21/22] net/_common_intel: remove unneeded code
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (19 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 20/22] net/iavf: " Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 22/22] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Konstantin Ananyev,
Vladimir Medvedkin, Anatoly Burakov
With all drivers using the common Tx structure updated so that their
vector paths all use the simplified Tx mbuf ring format, it's no longer
necessary to have a separate flag for the ring format and for use of a
vector driver.
Remove the former flag and base all decisions off the vector flag. With
that done, there are only two paths to consider when releasing all mbufs
in the ring, rather than three, which allows further simplification of
the "ci_txq_release_all_mbufs" function.
The separate function to free buffers from the vector driver not using
the simplified ring format can similarly be removed as no longer
necessary.
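The one surviving vector cleanup path (visible in the tx.h diff below)
walks a possibly-wrapping window of SW ring entries, with all indices
halved via ">> use_ctx" when context descriptors mean each packet takes
two descriptor slots but only one SW ring entry. A standalone toy of that
window arithmetic, using made-up example values:
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
static unsigned
count_window(uint16_t start, uint16_t end, uint16_t nb_entries)
{
        unsigned cnt = 0;
        uint16_t i = start;
        if (end < i) {                  /* window wraps past ring end */
                cnt += nb_entries - i;
                i = 0;
        }
        cnt += end - i;
        return cnt;
}
int main(void)
{
        const uint16_t nb_tx_desc = 512, tx_tail = 40;
        const uint16_t tx_next_dd = 500, tx_rs_thresh = 32;
        const bool use_ctx = true;      /* two descriptors per packet */
        const uint16_t start = (uint16_t)(tx_next_dd - tx_rs_thresh + 1) >> use_ctx;
        const uint16_t end = tx_tail >> use_ctx;
        const uint16_t nb = nb_tx_desc >> use_ctx;
        /* entries in [start, end), wrapping at nb, may still hold mbufs */
        printf("entries to free: %u\n", count_window(start, end, nb)); /* 42 */
        return 0;
}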
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 97 +++--------------------
drivers/net/i40e/i40e_rxtx.c | 1 -
drivers/net/iavf/iavf_rxtx_vec_sse.c | 1 -
drivers/net/ice/ice_rxtx.c | 1 -
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 1 -
5 files changed, 10 insertions(+), 91 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index aa42b9b49f..d9cf4474fc 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -66,7 +66,6 @@ struct ci_tx_queue {
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
bool vector_tx; /* port is using vector TX */
- bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
union { /* the VSI this queue belongs to */
struct i40e_vsi *i40e_vsi;
struct iavf_vsi *iavf_vsi;
@@ -120,72 +119,6 @@ ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts,
typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
-static __rte_always_inline int
-ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
-{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if (!desc_done(txq, txq->tx_next_dd))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
- for (i = 0; i < n; i++) {
- free[i] = txep[i].mbuf;
- /* no need to reset txep[i].mbuf in vector path */
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m != NULL)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline int
ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
{
@@ -278,21 +211,6 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
return txq->tx_rs_thresh;
}
-#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
- uint16_t i = start; \
- if (end < i) { \
- for (; i < nb_desc; i++) { \
- rte_pktmbuf_free_seg(swr[i].mbuf); \
- swr[i].mbuf = NULL; \
- } \
- i = 0; \
- } \
- for (; i < end; i++) { \
- rte_pktmbuf_free_seg(swr[i].mbuf); \
- swr[i].mbuf = NULL; \
- } \
-} while (0)
-
static inline void
ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
{
@@ -311,16 +229,21 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
/**
* vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
+ * so determining buffers to free is a little more complex.
*/
const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
const uint16_t end = txq->tx_tail >> use_ctx;
- if (txq->vector_sw_ring)
- IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
- else
- IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
+ uint16_t i = start;
+ if (end < i) {
+ for (; i < nb_desc; i++)
+ rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+ i = 0;
+ }
+ for (; i < end; i++)
+ rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+ memset(txq->sw_ring_vec, 0, sizeof(txq->sw_ring_vec[0]) * nb_desc);
}
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 745c467912..c3ff2e05c3 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,6 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
tx_queue_id);
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = txq->vector_tx;
/*
* tx_queue_id is queue id application refers to, while
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 9f7db80bfd..21d5bfd309 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1462,7 +1462,6 @@ int __rte_cold
iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
txq->vector_tx = true;
- txq->vector_sw_ring = txq->vector_tx;
return 0;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 77cb6688a7..dcfa409813 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,6 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* record what kind of descriptor cleanup we need on teardown */
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = txq->vector_tx;
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 65794e45cb..3d4840c3b7 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -183,7 +183,6 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
txq->sw_ring_vec = txq->sw_ring_vec + 1;
txq->ops = txq_ops;
txq->vector_tx = 1;
- txq->vector_sw_ring = 1;
return 0;
}
--
2.43.0
* [PATCH v2 22/22] net/ixgbe: use common Tx backlog entry fn
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (20 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 21/22] net/_common_intel: remove unneeded code Bruce Richardson
@ 2024-12-03 16:41 ` Bruce Richardson
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-03 16:41 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Remove the custom vector Tx backlog entry function and use the standard
intel_common one, now that all vector drivers are using the same,
smaller ring structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 10 ----------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 4 ++--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 4 ++--
3 files changed, 4 insertions(+), 14 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 3d4840c3b7..7316fc6c3b 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -68,16 +68,6 @@ ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 2ccb399b64..f879f6fa9a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -597,7 +597,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -614,7 +614,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring_vec[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index fa26365f06..915358e16b 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -720,7 +720,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -737,7 +737,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring_vec[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (23 preceding siblings ...)
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 01/22] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
` (21 more replies)
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
25 siblings, 22 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson
This patchset attempts to reduce the amount of code duplication across a
number of Intel NIC drivers, specifically: ixgbe, i40e, iavf, and ice.
The first patch extracts a function from the Rx side, otherwise the
majority of the changes are on the Tx side, leading to a converged Tx
queue structure across the 4 drivers, and a large number of common
functions.
v2->v3:
* Fix incorrect/unadjusted memset in patch 8, leading to incorrect
threshold tracking in ixgbe.
v1->v2:
* Fix two additional checkpatch issues that were flagged.
* Added in patch 21, which performs additional cleanup that is possible
once all vector drivers use the same mbuf free/release process.
[This brings the patchset to having over twice as many lines removed
as added (1887 vs 930), and close to having a net removal of 1kloc]
RFC->v1:
* Moved the location of the common code from "common/intel_eth" to
"net/_common_intel", and added only ".." to the driver include path so
that the paths included "_common_intel" in them, to make it clear it's
not driver-local headers.
* Due to change in location, structure/fn prefix changes from "ieth" to
"ci" for "common intel".
* Removed the seemingly arbitrary split of vector and non-vector code,
since much of the code taken from vector files was scalar code which
was used by the vector drivers.
* Split code into separate Rx and Tx files.
* Fixed multiple checkpatch issues (but not all).
* Attempted to improve name standardization, by using "_vec" as a common
suffix for all vector-related fns and data. Previously, some names had
"vec" in the middle, others had just "_v" suffix or full word "vector"
as suffix.
* Other minor changes...
Bruce Richardson (22):
net/_common_intel: add pkt reassembly fn for intel drivers
net/_common_intel: provide common Tx entry structures
net/_common_intel: add Tx mbuf ring replenish fn
drivers/net: align Tx queue struct field names
drivers/net: add prefix for driver-specific structs
net/_common_intel: merge ice and i40e Tx queue struct
net/iavf: use common Tx queue structure
net/ixgbe: convert Tx queue context cache field to ptr
net/ixgbe: use common Tx queue structure
net/_common_intel: pack Tx queue structure
net/_common_intel: add post-Tx buffer free function
net/_common_intel: add Tx buffer free fn for AVX-512
net/iavf: use common Tx free fn for AVX-512
net/ice: move Tx queue mbuf cleanup fn to common
net/i40e: use common Tx queue mbuf cleanup fn
net/ixgbe: use common Tx queue mbuf cleanup fn
net/iavf: use common Tx queue mbuf cleanup fn
net/ice: use vector SW ring for all vector paths
net/i40e: use vector SW ring for all vector paths
net/iavf: use vector SW ring for all vector paths
net/_common_intel: remove unneeded code
net/ixgbe: use common Tx backlog entry fn
drivers/net/_common_intel/rx.h | 79 ++++++
drivers/net/_common_intel/tx.h | 249 ++++++++++++++++++
drivers/net/i40e/i40e_ethdev.c | 4 +-
drivers/net/i40e/i40e_ethdev.h | 8 +-
drivers/net/i40e/i40e_fdir.c | 10 +-
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 6 +-
drivers/net/i40e/i40e_rxtx.c | 192 +++++---------
drivers/net/i40e/i40e_rxtx.h | 61 +----
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 144 +---------
drivers/net/i40e/i40e_rxtx_vec_common.h | 144 +---------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 26 +-
drivers/net/i40e/meson.build | 2 +-
drivers/net/iavf/iavf.h | 2 +-
drivers/net/iavf/iavf_ethdev.c | 4 +-
drivers/net/iavf/iavf_rxtx.c | 180 +++++--------
drivers/net/iavf/iavf_rxtx.h | 61 +----
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 47 ++--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 214 +++------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 160 +----------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 56 ++--
drivers/net/iavf/iavf_vchnl.c | 8 +-
drivers/net/iavf/meson.build | 2 +-
drivers/net/ice/ice_dcf.c | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_diagnose.c | 2 +-
drivers/net/ice/ice_ethdev.c | 2 +-
drivers/net/ice/ice_ethdev.h | 7 +-
drivers/net/ice/ice_rxtx.c | 163 +++++-------
drivers/net/ice/ice_rxtx.h | 52 +---
drivers/net/ice/ice_rxtx_vec_avx2.c | 26 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 153 +----------
drivers/net/ice/ice_rxtx_vec_common.h | 190 +------------
drivers/net/ice/ice_rxtx_vec_sse.c | 32 +--
drivers/net/ice/meson.build | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 2 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 139 +++++-----
drivers/net/ixgbe/ixgbe_rxtx.h | 73 +----
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 131 ++-------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 37 ++-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 37 ++-
drivers/net/ixgbe/meson.build | 2 +-
46 files changed, 931 insertions(+), 1891 deletions(-)
create mode 100644 drivers/net/_common_intel/rx.h
create mode 100644 drivers/net/_common_intel/tx.h
--
2.43.0
* [PATCH v3 01/22] net/_common_intel: add pkt reassembly fn for intel drivers
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 02/22] net/_common_intel: provide common Tx entry structures Bruce Richardson
` (20 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, David Christensen, Ian Stokes,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
The code for reassembling a single, multi-mbuf packet from multiple
buffers received from the NIC is duplicated across many drivers. Rather
than having multiple copies of this function, we can create an
"_common_intel" directory to hold such functions and consolidate
multiple functions down to a single one for easier maintenance.
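The contract the consolidated function relies on is the split_flags array
handed over by the vector Rx path: a set flag means the buffer continues
the in-progress packet, a clear flag closes it. A tiny standalone
illustration of that encoding (not driver code):
#include <stdint.h>
#include <stdio.h>
int main(void)
{
        /* four buffers: one 3-segment packet, then a single-buffer one */
        const uint8_t split_flags[4] = {1, 1, 0, 0};
        unsigned segs = 0, pkts = 0;
        for (unsigned i = 0; i < 4; i++) {
                segs++;
                if (!split_flags[i]) {  /* last segment of this packet */
                        printf("packet %u: %u segment(s)\n", pkts++, segs);
                        segs = 0;
                }
        }
        return 0;
}
The real function additionally fixes up pkt_len/data_len for CRC
stripping and carries a partially-built packet across bursts via the
pkt_first_seg/pkt_last_seg pointers it now takes as parameters.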
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/rx.h | 79 +++++++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 64 +-----------------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +-
drivers/net/i40e/meson.build | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 8 +--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 8 +--
drivers/net/iavf/iavf_rxtx_vec_common.h | 65 +------------------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 +--
drivers/net/iavf/meson.build | 2 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +--
drivers/net/ice/ice_rxtx_vec_common.h | 66 +------------------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 +-
drivers/net/ice/meson.build | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 63 +-----------------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 4 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 4 +-
drivers/net/ixgbe/meson.build | 2 +-
22 files changed, 121 insertions(+), 292 deletions(-)
create mode 100644 drivers/net/_common_intel/rx.h
diff --git a/drivers/net/_common_intel/rx.h b/drivers/net/_common_intel/rx.h
new file mode 100644
index 0000000000..5bd2fea7e3
--- /dev/null
+++ b/drivers/net/_common_intel/rx.h
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef _COMMON_INTEL_RX_H_
+#define _COMMON_INTEL_RX_H_
+
+#include <stdint.h>
+#include <unistd.h>
+#include <rte_mbuf.h>
+
+#define CI_RX_BURST 32
+
+static inline uint16_t
+ci_rx_reassemble_packets(struct rte_mbuf **rx_bufs, uint16_t nb_bufs, uint8_t *split_flags,
+ struct rte_mbuf **pkt_first_seg, struct rte_mbuf **pkt_last_seg,
+ const uint8_t crc_len)
+{
+ struct rte_mbuf *pkts[CI_RX_BURST] = {0}; /*finished pkts*/
+ struct rte_mbuf *start = *pkt_first_seg;
+ struct rte_mbuf *end = *pkt_last_seg;
+ unsigned int pkt_idx, buf_idx;
+
+ for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+ if (end) {
+ /* processing a split packet */
+ end->next = rx_bufs[buf_idx];
+ rx_bufs[buf_idx]->data_len += crc_len;
+
+ start->nb_segs++;
+ start->pkt_len += rx_bufs[buf_idx]->data_len;
+ end = end->next;
+
+ if (!split_flags[buf_idx]) {
+ /* it's the last packet of the set */
+ start->hash = end->hash;
+ start->vlan_tci = end->vlan_tci;
+ start->ol_flags = end->ol_flags;
+ /* we need to strip crc for the whole packet */
+ start->pkt_len -= crc_len;
+ if (end->data_len > crc_len) {
+ end->data_len -= crc_len;
+ } else {
+ /* free up last mbuf */
+ struct rte_mbuf *secondlast = start;
+
+ start->nb_segs--;
+ while (secondlast->next != end)
+ secondlast = secondlast->next;
+ secondlast->data_len -= (crc_len - end->data_len);
+ secondlast->next = NULL;
+ rte_pktmbuf_free_seg(end);
+ }
+ pkts[pkt_idx++] = start;
+ start = NULL;
+ end = NULL;
+ }
+ } else {
+ /* not processing a split packet */
+ if (!split_flags[buf_idx]) {
+ /* not a split packet, save and skip */
+ pkts[pkt_idx++] = rx_bufs[buf_idx];
+ continue;
+ }
+ start = rx_bufs[buf_idx];
+ end = start;
+ rx_bufs[buf_idx]->data_len += crc_len;
+ rx_bufs[buf_idx]->pkt_len += crc_len;
+ }
+ }
+
+ /* save the partial packet for next time */
+ *pkt_first_seg = start;
+ *pkt_last_seg = end;
+ memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+ return pkt_idx;
+}
+
+#endif /* _COMMON_INTEL_RX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index b6b0d38ec1..95829f65d5 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -494,8 +494,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
if (i == nb_bufs)
return nb_bufs;
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 19cf0ac718..6dd6e55d9c 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -657,8 +657,8 @@ i40e_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/*
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 3b2750221b..506f1b5878 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -725,8 +725,8 @@ i40e_recv_scattered_burst_vec_avx512(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 8b745630e4..1248cecacd 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <ethdev_driver.h>
#include <rte_malloc.h>
+#include <_common_intel/rx.h>
#include "i40e_ethdev.h"
#include "i40e_rxtx.h"
@@ -15,69 +16,6 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static inline uint16_t
-reassemble_packets(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[RTE_I40E_VPMD_RX_BURST]; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end != NULL) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len)
- end->data_len -= rxq->crc_len;
- else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index e1c5c7041b..159d971796 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -623,8 +623,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index ad560d2b6b..3a8128e014 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -641,8 +641,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/meson.build b/drivers/net/i40e/meson.build
index 5c93493124..0e0b416b8f 100644
--- a/drivers/net/i40e/meson.build
+++ b/drivers/net/i40e/meson.build
@@ -36,7 +36,7 @@ sources = files(
testpmd_sources = files('i40e_testpmd.c')
deps += ['hash']
-includes += include_directories('base')
+includes += include_directories('base', '..')
if arch_subdir == 'x86'
sources += files('i40e_rxtx_vec_sse.c')
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 49d41af953..0baf5045c8 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1508,8 +1508,8 @@ iavf_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1597,8 +1597,8 @@ iavf_recv_scattered_burst_vec_avx2_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index d6a861bf80..5a88007096 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1685,8 +1685,8 @@ iavf_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1761,8 +1761,8 @@ iavf_recv_scattered_burst_vec_avx512_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 5c5220048d..26b6f07614 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <ethdev_driver.h>
#include <rte_malloc.h>
+#include <_common_intel/rx.h>
#include "iavf.h"
#include "iavf_rxtx.h"
@@ -15,70 +16,6 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static __rte_always_inline uint16_t
-reassemble_packets(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[IAVF_VPMD_RX_MAX_BURST];
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len) {
- end->data_len -= rxq->crc_len;
- } else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = NULL;
- end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
iavf_tx_free_bufs(struct iavf_tx_queue *txq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 0db6fa8bd4..48b01462ea 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1238,8 +1238,8 @@ iavf_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1307,8 +1307,8 @@ iavf_recv_scattered_burst_vec_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index b48bb83438..9106e016ef 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -5,7 +5,7 @@ if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
subdir_done()
endif
-includes += include_directories('../../common/iavf')
+includes += include_directories('../../common/iavf', '..')
testpmd_sources = files('iavf_testpmd.c')
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index d6e88dbb29..ca247b155c 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -726,8 +726,8 @@ ice_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index add095ef06..1e603d5d8f 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -763,8 +763,8 @@ ice_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -805,8 +805,8 @@ ice_recv_scattered_burst_vec_avx512_offload(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 4b73465af5..dd7da4761f 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -5,77 +5,13 @@
#ifndef _ICE_RXTX_VEC_COMMON_H_
#define _ICE_RXTX_VEC_COMMON_H_
+#include <_common_intel/rx.h>
#include "ice_rxtx.h"
#ifndef __INTEL_COMPILER
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static inline uint16_t
-ice_rx_reassemble_packets(struct ice_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[ICE_VPMD_RX_BURST] = {0}; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len) {
- end->data_len -= rxq->crc_len;
- } else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = NULL;
- end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- start = rx_bufs[buf_idx];
- end = start;
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index c01d8ede29..01533454ba 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -640,8 +640,8 @@ ice_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index 1c9dc0cc6d..02c028db73 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -19,7 +19,7 @@ sources = files(
testpmd_sources = files('ice_testpmd.c')
deps += ['hash', 'net', 'common_iavf']
-includes += include_directories('base', '../../common/iavf')
+includes += include_directories('base', '..')
if arch_subdir == 'x86'
sources += files('ice_rxtx_vec_sse.c')
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index a4d9ec9b08..2bab17c934 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -7,71 +7,10 @@
#include <stdint.h>
#include <ethdev_driver.h>
+#include <_common_intel/rx.h>
#include "ixgbe_ethdev.h"
#include "ixgbe_rxtx.h"
-static inline uint16_t
-reassemble_packets(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[nb_bufs]; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end != NULL) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len)
- end->data_len -= rxq->crc_len;
- else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 952b032eb6..7b35093075 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -516,8 +516,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a77370cdb7..a709bf8c7f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -639,8 +639,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ixgbe/meson.build b/drivers/net/ixgbe/meson.build
index 0ae12dd5ff..a65ff51379 100644
--- a/drivers/net/ixgbe/meson.build
+++ b/drivers/net/ixgbe/meson.build
@@ -35,6 +35,6 @@ elif arch_subdir == 'arm'
sources += files('ixgbe_recycle_mbufs_vec_common.c')
endif
-includes += include_directories('base')
+includes += include_directories('base', '..')
headers = files('rte_pmd_ixgbe.h')
--
2.43.0
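
All the call sites converted above share one shape: the vector Rx burst
returns contiguous completed packets directly, then hands any trailing
split descriptors to the common helper. Below is a minimal sketch of
that tail; the queue type and function name are hypothetical stand-ins
for the i40e/iavf/ice/ixgbe ones, while the argument list mirrors the
diffs above.

#include <stdint.h>
#include <rte_mbuf.h>
#include <_common_intel/rx.h> /* ci_rx_reassemble_packets() */

/* Hypothetical driver Rx queue; real drivers use i40e_rx_queue etc. */
struct hypothetical_rx_queue {
	struct rte_mbuf *pkt_first_seg; /* first seg of in-progress pkt */
	struct rte_mbuf *pkt_last_seg;  /* last seg of in-progress pkt */
	uint16_t crc_len;               /* 0 or 4, per CRC strip setting */
};

/* Sketch only: complete packets in rx_pkts[0..i) are returned as-is;
 * the remaining buffers, marked in split_flags, are stitched back
 * together by the common reassembly routine, which also tracks any
 * packet left unfinished at the end of the burst via the queue's
 * first/last segment pointers.
 */
static uint16_t
recv_scattered_burst_tail(struct hypothetical_rx_queue *rxq,
		struct rte_mbuf **rx_pkts, uint16_t nb_bufs,
		uint8_t *split_flags, uint16_t i)
{
	return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i,
			&split_flags[i], &rxq->pkt_first_seg,
			&rxq->pkt_last_seg, rxq->crc_len);
}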
* [PATCH v3 02/22] net/_common_intel: provide common Tx entry structures
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 01/22] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 03/22] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
` (19 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
The Tx entry structures, both vector and scalar, are common across Intel
drivers, so provide a single definition to be used everywhere.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 27 +++++++++++++++++++
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 18 ++++++-------
drivers/net/i40e/i40e_rxtx.h | 14 +++-------
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 ++---
drivers/net/i40e/i40e_rxtx_vec_common.h | 4 +--
drivers/net/i40e/i40e_rxtx_vec_neon.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 2 +-
drivers/net/iavf/iavf_rxtx.c | 12 ++++-----
drivers/net/iavf/iavf_rxtx.h | 14 +++-------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 10 +++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 4 +--
drivers/net/iavf/iavf_rxtx_vec_sse.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 2 +-
drivers/net/ice/ice_rxtx.c | 16 +++++------
drivers/net/ice/ice_rxtx.h | 13 ++-------
drivers/net/ice/ice_rxtx_vec_avx2.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 6 ++---
drivers/net/ice/ice_rxtx_vec_common.h | 6 ++---
drivers/net/ice/ice_rxtx_vec_sse.c | 2 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++++------
drivers/net/ixgbe/ixgbe_rxtx.h | 22 +++------------
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 8 +++---
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +-
29 files changed, 105 insertions(+), 117 deletions(-)
create mode 100644 drivers/net/_common_intel/tx.h
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
new file mode 100644
index 0000000000..384352b9db
--- /dev/null
+++ b/drivers/net/_common_intel/tx.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef _COMMON_INTEL_TX_H_
+#define _COMMON_INTEL_TX_H_
+
+#include <stdint.h>
+#include <rte_mbuf.h>
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue.
+ */
+struct ci_tx_entry {
+ struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+ uint16_t next_id; /* Index of next descriptor in ring. */
+ uint16_t last_id; /* Index of last scattered descriptor. */
+};
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue in vector Tx.
+ */
+struct ci_tx_entry_vec {
+ struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+};
+
+#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 14424c9921..260d238ce4 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -56,7 +56,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
struct i40e_tx_queue *txq = tx_queue;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
uint16_t nb_recycle_mbufs;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 839c8a5442..2e1f07d2a1 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -378,7 +378,7 @@ i40e_build_ctob(uint32_t td_cmd,
static inline int
i40e_xmit_cleanup(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct i40e_tx_desc *txd = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -1081,8 +1081,8 @@ uint16_t
i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct i40e_tx_queue *txq;
- struct i40e_tx_entry *sw_ring;
- struct i40e_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
volatile struct i40e_tx_desc *txd;
volatile struct i40e_tx_desc *txr;
struct rte_mbuf *tx_pkt;
@@ -1331,7 +1331,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t tx_rs_thresh = txq->tx_rs_thresh;
uint16_t i = 0, j = 0;
struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
@@ -1418,7 +1418,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
uint16_t nb_pkts)
{
volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
- struct i40e_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
int mainpart, leftover;
@@ -2555,7 +2555,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket("i40e tx sw ring",
- sizeof(struct i40e_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2723,7 +2723,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
*/
#ifdef CC_AVX512_SUPPORT
if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
- struct i40e_vec_tx_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
if (txq->tx_tail < i) {
@@ -2768,7 +2768,7 @@ static int
i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
uint32_t free_cnt)
{
- struct i40e_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -2874,7 +2874,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
void
i40e_reset_tx_queue(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint16_t i, prev, size;
if (!txq) {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 33fc9770d9..0f5d3cb0b7 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _I40E_RXTX_H_
#define _I40E_RXTX_H_
+#include <_common_intel/tx.h>
+
#define RTE_PMD_I40E_RX_MAX_BURST 32
#define RTE_PMD_I40E_TX_MAX_BURST 32
@@ -122,16 +124,6 @@ struct i40e_rx_queue {
const struct rte_memzone *mz;
};
-struct i40e_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct i40e_vec_tx_entry {
- struct rte_mbuf *mbuf;
-};
-
/*
* Structure associated with each TX queue.
*/
@@ -139,7 +131,7 @@ struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
uint64_t tx_ring_phys_addr; /**< TX ring DMA address */
volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
- struct i40e_tx_entry *sw_ring; /**< virtual address of SW ring */
+ struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
volatile uint8_t *qtx_tail; /**< register address of tail */
uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 95829f65d5..ca1038eaa6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -553,7 +553,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 6dd6e55d9c..e8441de759 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -745,7 +745,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 506f1b5878..8b8a16daa8 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -757,7 +757,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
static __rte_always_inline int
i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
{
- struct i40e_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -920,7 +920,7 @@ vtx(volatile struct i40e_tx_desc *txdp,
}
static __rte_always_inline void
-tx_backlog_entry_avx512(struct i40e_vec_tx_entry *txep,
+tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -935,7 +935,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 1248cecacd..619fb89110 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -19,7 +19,7 @@
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -85,7 +85,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct i40e_tx_entry *txep,
+tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 159d971796..9b90a32e28 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -681,7 +681,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 3a8128e014..e1fa2ed543 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -700,7 +700,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6a093c6746..e337f20073 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -284,7 +284,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq)
static inline void
reset_tx_queue(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint32_t i, size;
uint16_t prev;
@@ -860,7 +860,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket("iavf tx sw ring",
- sizeof(struct iavf_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2379,7 +2379,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue,
static inline int
iavf_xmit_cleanup(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -2797,8 +2797,8 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct iavf_tx_queue *txq = tx_queue;
volatile struct iavf_tx_desc *txr = txq->tx_ring;
- struct iavf_tx_entry *txe_ring = txq->sw_ring;
- struct iavf_tx_entry *txe, *txn;
+ struct ci_tx_entry *txe_ring = txq->sw_ring;
+ struct ci_tx_entry *txe, *txn;
struct rte_mbuf *mb, *mb_seg;
uint64_t buf_dma_addr;
uint16_t desc_idx, desc_idx_last;
@@ -4268,7 +4268,7 @@ static int
iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
uint32_t free_cnt)
{
- struct iavf_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 7b56076d32..1a191f2c89 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _IAVF_RXTX_H_
#define _IAVF_RXTX_H_
+#include <_common_intel/tx.h>
+
/* In QLEN must be whole number of 32 descriptors. */
#define IAVF_ALIGN_RING_DESC 32
#define IAVF_MIN_RING_DESC 64
@@ -271,22 +273,12 @@ struct iavf_rx_queue {
uint64_t hw_time_update;
};
-struct iavf_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct iavf_tx_vec_entry {
- struct rte_mbuf *mbuf;
-};
-
/* Structure associated with each TX queue. */
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
- struct iavf_tx_entry *sw_ring; /* address array of SW ring */
+ struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
uint16_t tx_tail; /* current value of tail */
volatile uint8_t *qtx_tail; /* register address of tail */
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 0baf5045c8..e7d3d52655 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1736,7 +1736,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 5a88007096..a899309f94 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1847,7 +1847,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
static __rte_always_inline int
iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
{
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -1960,7 +1960,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry_avx512(struct iavf_tx_vec_entry *txep,
+tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -2313,7 +2313,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
@@ -2380,7 +2380,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, nb_mbuf, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
@@ -2478,7 +2478,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */
const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
- struct iavf_tx_vec_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
if (!txq->sw_ring || txq->nb_free == max_desc)
return;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 26b6f07614..df40857218 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -19,7 +19,7 @@
static __rte_always_inline int
iavf_tx_free_bufs(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -74,7 +74,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct iavf_tx_entry *txep,
+tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 48b01462ea..0a30b1ef64 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1368,7 +1368,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 91f4943a11..4b98e4066b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -389,7 +389,7 @@ reset_rx_queue(struct ice_rx_queue *rxq)
static inline void
reset_tx_queue(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint32_t i, size;
uint16_t prev;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0c7106c7e0..d584086a36 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1028,7 +1028,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
static void
ice_reset_tx_queue(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint16_t i, prev, size;
if (!txq) {
@@ -1509,7 +1509,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket(NULL,
- sizeof(struct ice_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2837,7 +2837,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
static inline int
ice_xmit_cleanup(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct ice_tx_desc *txd = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -2961,8 +2961,8 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
struct ice_tx_queue *txq;
volatile struct ice_tx_desc *tx_ring;
volatile struct ice_tx_desc *txd;
- struct ice_tx_entry *sw_ring;
- struct ice_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
struct rte_mbuf *tx_pkt;
struct rte_mbuf *m_seg;
uint32_t cd_tunneling_params;
@@ -3184,7 +3184,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
static __rte_always_inline int
ice_tx_free_bufs(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t i;
if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
@@ -3221,7 +3221,7 @@ static int
ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
uint32_t free_cnt)
{
- struct ice_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -3361,7 +3361,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
- struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
int mainpart, leftover;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 45f25b3609..8d1a1a8676 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -5,6 +5,7 @@
#ifndef _ICE_RXTX_H_
#define _ICE_RXTX_H_
+#include <_common_intel/tx.h>
#include "ice_ethdev.h"
#define ICE_ALIGN_RING_DESC 32
@@ -144,21 +145,11 @@ struct ice_rx_queue {
bool ts_enable; /* if rxq timestamp is enabled */
};
-struct ice_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct ice_vec_tx_entry {
- struct rte_mbuf *mbuf;
-};
-
struct ice_tx_queue {
uint16_t nb_tx_desc; /* number of TX descriptors */
rte_iova_t tx_ring_dma; /* TX ring DMA address */
volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
- struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
uint16_t tx_tail; /* current value of tail register */
volatile uint8_t *qtx_tail; /* register address of tail */
uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index ca247b155c..cf1862263a 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 1e603d5d8f..6b6aa3f1fe 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -862,7 +862,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
static __rte_always_inline int
ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
{
- struct ice_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -1040,7 +1040,7 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
}
static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ice_vec_tx_entry *txep,
+ice_tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -1055,7 +1055,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index dd7da4761f..3dc6061e84 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -15,7 +15,7 @@
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -70,7 +70,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
}
static __rte_always_inline void
-ice_tx_backlog_entry(struct ice_tx_entry *txep,
+ice_tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -135,7 +135,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
- struct ice_vec_tx_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
if (txq->tx_tail < i) {
for (; i < txq->nb_tx_desc; i++) {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 01533454ba..889b754cc1 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index d451562269..2241726ad8 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -52,7 +52,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
struct ixgbe_tx_queue *txq = tx_queue;
- struct ixgbe_tx_entry *txep;
+ struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
uint32_t status;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 7d16eb9df7..db4b993ebc 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -100,7 +100,7 @@
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t status;
int i, nb_free = 0;
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
@@ -199,7 +199,7 @@ ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
- struct ixgbe_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP-1;
int mainpart, leftover;
@@ -563,7 +563,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
static inline int
ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -624,8 +624,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
struct ixgbe_tx_queue *txq;
- struct ixgbe_tx_entry *sw_ring;
- struct ixgbe_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
volatile union ixgbe_adv_tx_desc *txr;
volatile union ixgbe_adv_tx_desc *txd, *txp;
struct rte_mbuf *tx_pkt;
@@ -2352,7 +2352,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
static int
ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
{
- struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -2490,7 +2490,7 @@ static void __rte_cold
ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = {{0}};
- struct ixgbe_tx_entry *txe = txq->sw_ring;
+ struct ci_tx_entry *txe = txq->sw_ring;
uint16_t prev, i;
/* Zero out HW ring memory */
@@ -2795,7 +2795,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
- sizeof(struct ixgbe_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq->sw_ring == NULL) {
ixgbe_tx_queue_release(txq);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 0550c1da60..1647396419 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _IXGBE_RXTX_H_
#define _IXGBE_RXTX_H_
+#include <_common_intel/tx.h>
+
/*
* Rings setup and release.
*
@@ -75,22 +77,6 @@ struct ixgbe_scattered_rx_entry {
struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
};
-/**
- * Structure associated with each descriptor of the TX ring of a TX queue.
- */
-struct ixgbe_tx_entry {
- struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
- uint16_t next_id; /**< Index of next descriptor in ring. */
- uint16_t last_id; /**< Index of last scattered descriptor. */
-};
-
-/**
- * Structure associated with each descriptor of the TX ring of a TX queue.
- */
-struct ixgbe_tx_entry_v {
- struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
-};
-
/**
* Structure associated with each RX queue.
*/
@@ -202,8 +188,8 @@ struct ixgbe_tx_queue {
volatile union ixgbe_adv_tx_desc *tx_ring;
uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
union {
- struct ixgbe_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
- struct ixgbe_tx_entry_v *sw_ring_v; /**< address of SW ring for vector PMD */
+ struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
+ struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
};
volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
uint16_t nb_tx_desc; /**< number of TX descriptors. */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 2bab17c934..e9592c0d08 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -14,7 +14,7 @@
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t status;
uint32_t n;
uint32_t i;
@@ -69,7 +69,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct ixgbe_tx_entry_v *txep,
+tx_backlog_entry(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -82,7 +82,7 @@ static inline void
_ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
{
unsigned int i;
- struct ixgbe_tx_entry_v *txe;
+ struct ci_tx_entry_vec *txe;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc)
@@ -149,7 +149,7 @@ static inline void
_ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } };
- struct ixgbe_tx_entry_v *txe = txq->sw_ring_v;
+ struct ci_tx_entry_vec *txe = txq->sw_ring_v;
uint16_t i;
/* Zero out HW ring memory */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 7b35093075..02b53c008e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -573,7 +573,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = DCMD_DTYP_FLAGS;
uint64_t rs = IXGBE_ADVTXD_DCMD_RS | DCMD_DTYP_FLAGS;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a709bf8c7f..c8b5377c9f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -695,7 +695,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = DCMD_DTYP_FLAGS;
uint64_t rs = IXGBE_ADVTXD_DCMD_RS|DCMD_DTYP_FLAGS;
--
2.43.0
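
One detail worth noting in the hunks above: the AVX-512 paths reuse the
SW ring allocation as a denser array of single-pointer entries, either
through casts such as "struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;"
or, in ixgbe, through the sw_ring/sw_ring_v union. That reuse is sound
because an array sized for the three-field scalar entries can always
hold the same number of single-field vector entries. A self-contained
illustration of the invariant follows; the assertions are illustrative
additions, not part of the patch.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct rte_mbuf; /* opaque here; the real definition is in rte_mbuf.h */

/* Local copies of the two entry layouts from the new tx.h, repeated
 * here only so this sketch stands alone.
 */
struct ci_tx_entry {
	struct rte_mbuf *mbuf;
	uint16_t next_id;
	uint16_t last_id;
};

struct ci_tx_entry_vec {
	struct rte_mbuf *mbuf;
};

/* Illustrative assertions, not in the patch: the vector entry must fit
 * inside the scalar-entry allocation, and both layouts must keep the
 * mbuf pointer as their first member.
 */
static_assert(sizeof(struct ci_tx_entry_vec) <= sizeof(struct ci_tx_entry),
		"vector entries must fit in the scalar SW ring allocation");
static_assert(offsetof(struct ci_tx_entry, mbuf) == 0 &&
		offsetof(struct ci_tx_entry_vec, mbuf) == 0,
		"mbuf must be the first member of both entry types");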
* [PATCH v3 03/22] net/_common_intel: add Tx mbuf ring replenish fn
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 01/22] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 02/22] net/_common_intel: provide common Tx entry structures Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 04/22] drivers/net: align Tx queue struct field names Bruce Richardson
` (18 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, David Christensen, Ian Stokes,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
Move the short function used to place mbufs on the SW Tx ring to common
code to avoid duplication.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 7 +++++++
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_common.h | 10 ----------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 ++--
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 ++--
drivers/net/iavf/iavf_rxtx_vec_common.h | 10 ----------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 4 ++--
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 ++--
drivers/net/ice/ice_rxtx_vec_common.h | 10 ----------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 ++--
12 files changed, 23 insertions(+), 46 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 384352b9db..5397007411 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -24,4 +24,11 @@ struct ci_tx_entry_vec {
struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
};
+static __rte_always_inline void
+ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ for (uint16_t i = 0; i < (int)nb_pkts; ++i)
+ txep[i].mbuf = tx_pkts[i];
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index ca1038eaa6..80f07a3e10 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -575,7 +575,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -592,7 +592,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index e8441de759..b26bae4757 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -765,7 +765,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -783,7 +783,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 619fb89110..325e99c1a4 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -84,16 +84,6 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 9b90a32e28..26bc345a0a 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -702,7 +702,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -719,7 +719,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index e1fa2ed543..ebc32b0d27 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -721,7 +721,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -738,7 +738,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index e7d3d52655..28885800e0 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1757,7 +1757,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
iavf_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -1775,7 +1775,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index df40857218..2c118cc059 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -73,16 +73,6 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
return txq->rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 0a30b1ef64..bc4b8f14c8 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1390,7 +1390,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -1407,7 +1407,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index cf1862263a..336697e72d 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -881,7 +881,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -899,7 +899,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- ice_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 3dc6061e84..32e4541267 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -69,16 +69,6 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-ice_tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 889b754cc1..debdd8f6a2 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -724,7 +724,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
ice_vtx1(txdp, *tx_pkts, flags);
@@ -741,7 +741,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- ice_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
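
Every caller converted above uses ci_tx_backlog_entry() in the same
two-step pattern around the ring wrap point. The sketch below condenses
that pattern; the queue and descriptor types, the vtx() stub and the
tx_fill_backlog_with_wrap() wrapper are hypothetical, and the RS-bit
handling the real drivers apply to the last descriptor before the wrap
is elided for brevity.

#include <stdint.h>
#include <rte_mbuf.h>
#include <_common_intel/tx.h> /* struct ci_tx_entry, ci_tx_backlog_entry() */

/* Hypothetical stand-ins for the per-driver queue/descriptor types. */
struct hypothetical_tx_desc { uint64_t qw0, qw1; };
struct hypothetical_tx_queue {
	volatile struct hypothetical_tx_desc *tx_ring;
	struct ci_tx_entry *sw_ring;
	uint16_t nb_tx_desc;
	uint16_t tx_tail;
};

/* Placeholder for the driver-specific descriptor writers (the vtx() and
 * vtx1() routines seen in the diffs); deliberately left empty here.
 */
static void
vtx(volatile struct hypothetical_tx_desc *txdp, struct rte_mbuf **pkts,
		uint16_t nb, uint64_t flags)
{
	(void)txdp; (void)pkts; (void)nb; (void)flags;
}

/* Condensed caller pattern: record mbufs in the SW ring and write
 * descriptors up to the end of the ring, wrap both pointers back to
 * index 0, then handle the (possibly empty) remainder.
 */
static void
tx_fill_backlog_with_wrap(struct hypothetical_tx_queue *txq,
		struct rte_mbuf **tx_pkts, uint16_t nb_commit, uint64_t flags)
{
	uint16_t tx_id = txq->tx_tail;
	volatile struct hypothetical_tx_desc *txdp = &txq->tx_ring[tx_id];
	struct ci_tx_entry *txep = &txq->sw_ring[tx_id];
	uint16_t n = (uint16_t)(txq->nb_tx_desc - tx_id);

	if (nb_commit >= n) {
		ci_tx_backlog_entry(txep, tx_pkts, n); /* up to ring end */
		vtx(txdp, tx_pkts, n, flags);
		tx_pkts += n;
		nb_commit = (uint16_t)(nb_commit - n);

		tx_id = 0;                  /* wrap to ring start */
		txdp = &txq->tx_ring[0];
		txep = &txq->sw_ring[0];
	}

	ci_tx_backlog_entry(txep, tx_pkts, nb_commit); /* remainder */
	vtx(txdp, tx_pkts, nb_commit, flags);

	txq->tx_tail = (uint16_t)(tx_id + nb_commit);
}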
* [PATCH v3 04/22] drivers/net: align Tx queue struct field names
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (2 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 03/22] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 05/22] drivers/net: add prefix for driver-specific structs Bruce Richardson
` (17 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin,
Konstantin Ananyev, Anatoly Burakov, Wathsala Vithanage
Across the various Intel drivers, fields in the Tx queue structure that
serve the same function are sometimes given different names. Rename them
consistently to ease the future merging of the structures: for example,
tx_ring_phys_addr becomes tx_ring_dma everywhere, while the iavf fields
rs_thresh, free_thresh, nb_used, nb_free, next_dd and next_rs gain the
tx_ prefix already used by the other drivers.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 6 +--
drivers/net/i40e/i40e_rxtx.h | 2 +-
drivers/net/iavf/iavf_rxtx.c | 60 ++++++++++++-------------
drivers/net/iavf/iavf_rxtx.h | 14 +++---
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 19 ++++----
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 57 +++++++++++------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 24 +++++-----
drivers/net/iavf/iavf_rxtx_vec_sse.c | 18 ++++----
drivers/net/iavf/iavf_vchnl.c | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++----
drivers/net/ixgbe/ixgbe_rxtx.h | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +-
14 files changed, 116 insertions(+), 114 deletions(-)
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 2e1f07d2a1..b0bb20fe9a 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2549,7 +2549,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
/* Allocate software ring */
@@ -2923,7 +2923,7 @@ i40e_tx_queue_init(struct i40e_tx_queue *txq)
/* clear the context structure first */
memset(&tx_ctx, 0, sizeof(tx_ctx));
tx_ctx.new_context = 1;
- tx_ctx.base = txq->tx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT;
+ tx_ctx.base = txq->tx_ring_dma / I40E_QUEUE_BASE_ADDR_UNIT;
tx_ctx.qlen = txq->nb_tx_desc;
#ifdef RTE_LIBRTE_IEEE1588
@@ -3209,7 +3209,7 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
txq->vsi = pf->fdir.fdir_vsi;
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
/*
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 0f5d3cb0b7..f420c98687 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -129,7 +129,7 @@ struct i40e_rx_queue {
*/
struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
- uint64_t tx_ring_phys_addr; /**< TX ring DMA address */
+ rte_iova_t tx_ring_dma; /**< TX ring DMA address */
volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index e337f20073..adaaeb4625 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -216,8 +216,8 @@ static inline bool
check_tx_vec_allow(struct iavf_tx_queue *txq)
{
if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) &&
- txq->rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
- txq->rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) {
+ txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
+ txq->tx_rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) {
PMD_INIT_LOG(DEBUG, "Vector tx can be enabled on this txq.");
return true;
}
@@ -309,13 +309,13 @@ reset_tx_queue(struct iavf_tx_queue *txq)
}
txq->tx_tail = 0;
- txq->nb_used = 0;
+ txq->nb_tx_used = 0;
txq->last_desc_cleaned = txq->nb_tx_desc - 1;
- txq->nb_free = txq->nb_tx_desc - 1;
+ txq->nb_tx_free = txq->nb_tx_desc - 1;
- txq->next_dd = txq->rs_thresh - 1;
- txq->next_rs = txq->rs_thresh - 1;
+ txq->tx_next_dd = txq->tx_rs_thresh - 1;
+ txq->tx_next_rs = txq->tx_rs_thresh - 1;
}
static int
@@ -845,8 +845,8 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
txq->nb_tx_desc = nb_desc;
- txq->rs_thresh = tx_rs_thresh;
- txq->free_thresh = tx_free_thresh;
+ txq->tx_rs_thresh = tx_rs_thresh;
+ txq->tx_free_thresh = tx_free_thresh;
txq->queue_id = queue_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
@@ -881,7 +881,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
rte_free(txq);
return -ENOMEM;
}
- txq->tx_ring_phys_addr = mz->iova;
+ txq->tx_ring_dma = mz->iova;
txq->tx_ring = (struct iavf_tx_desc *)mz->addr;
txq->mz = mz;
@@ -2387,7 +2387,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
volatile struct iavf_tx_desc *txd = txq->tx_ring;
- desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+ desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
if (desc_to_clean_to >= nb_tx_desc)
desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
@@ -2411,7 +2411,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
txq->last_desc_cleaned = desc_to_clean_to;
- txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
return 0;
}
@@ -2807,7 +2807,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Check if the descriptor ring needs to be cleaned. */
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_xmit_cleanup(txq);
desc_idx = txq->tx_tail;
@@ -2862,14 +2862,14 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
"port_id=%u queue_id=%u tx_first=%u tx_last=%u",
txq->port_id, txq->queue_id, desc_idx, desc_idx_last);
- if (nb_desc_required > txq->nb_free) {
+ if (nb_desc_required > txq->nb_tx_free) {
if (iavf_xmit_cleanup(txq)) {
if (idx == 0)
return 0;
goto end_of_tx;
}
- if (unlikely(nb_desc_required > txq->rs_thresh)) {
- while (nb_desc_required > txq->nb_free) {
+ if (unlikely(nb_desc_required > txq->tx_rs_thresh)) {
+ while (nb_desc_required > txq->nb_tx_free) {
if (iavf_xmit_cleanup(txq)) {
if (idx == 0)
return 0;
@@ -2991,10 +2991,10 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* The last packet data descriptor needs End Of Packet (EOP) */
ddesc_cmd = IAVF_TX_DESC_CMD_EOP;
- txq->nb_used = (uint16_t)(txq->nb_used + nb_desc_required);
- txq->nb_free = (uint16_t)(txq->nb_free - nb_desc_required);
+ txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_desc_required);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_desc_required);
- if (txq->nb_used >= txq->rs_thresh) {
+ if (txq->nb_tx_used >= txq->tx_rs_thresh) {
PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
"%4u (port=%d queue=%d)",
desc_idx_last, txq->port_id, txq->queue_id);
@@ -3002,7 +3002,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ddesc_cmd |= IAVF_TX_DESC_CMD_RS;
/* Update txq RS bit counters */
- txq->nb_used = 0;
+ txq->nb_tx_used = 0;
}
ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
@@ -4278,11 +4278,11 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
tx_id = txq->tx_tail;
tx_last = tx_id;
- if (txq->nb_free == 0 && iavf_xmit_cleanup(txq))
+ if (txq->nb_tx_free == 0 && iavf_xmit_cleanup(txq))
return 0;
- nb_tx_to_clean = txq->nb_free;
- nb_tx_free_last = txq->nb_free;
+ nb_tx_to_clean = txq->nb_tx_free;
+ nb_tx_free_last = txq->nb_tx_free;
if (!free_cnt)
free_cnt = txq->nb_tx_desc;
@@ -4305,16 +4305,16 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
tx_id = swr_ring[tx_id].next_id;
} while (--nb_tx_to_clean && pkt_cnt < free_cnt && tx_id != tx_last);
- if (txq->rs_thresh > txq->nb_tx_desc -
- txq->nb_free || tx_id == tx_last)
+ if (txq->tx_rs_thresh > txq->nb_tx_desc -
+ txq->nb_tx_free || tx_id == tx_last)
break;
if (pkt_cnt < free_cnt) {
if (iavf_xmit_cleanup(txq))
break;
- nb_tx_to_clean = txq->nb_free - nb_tx_free_last;
- nb_tx_free_last = txq->nb_free;
+ nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+ nb_tx_free_last = txq->nb_tx_free;
}
}
@@ -4356,8 +4356,8 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_free_thresh = txq->free_thresh;
- qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
qinfo->conf.offloads = txq->offloads;
qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
}
@@ -4432,8 +4432,8 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
desc = txq->tx_tail + offset;
/* go to next desc that has the RS bit */
- desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
- txq->rs_thresh;
+ desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+ txq->tx_rs_thresh;
if (desc >= txq->nb_tx_desc) {
desc -= txq->nb_tx_desc;
if (desc >= txq->nb_tx_desc)
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 1a191f2c89..44e2de731c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -277,25 +277,25 @@ struct iavf_rx_queue {
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
- uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
+ rte_iova_t tx_ring_dma; /* Tx ring DMA address */
struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
uint16_t tx_tail; /* current value of tail */
volatile uint8_t *qtx_tail; /* register address of tail */
/* number of used desc since RS bit set */
- uint16_t nb_used;
- uint16_t nb_free;
+ uint16_t nb_tx_used;
+ uint16_t nb_tx_free;
uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t free_thresh;
- uint16_t rs_thresh;
+ uint16_t tx_free_thresh;
+ uint16_t tx_rs_thresh;
uint8_t rel_mbufs_type;
struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
uint16_t port_id;
uint16_t queue_id;
uint64_t offloads;
- uint16_t next_dd; /* next to set RS, for VPMD */
- uint16_t next_rs; /* next to check DD, for VPMD */
+ uint16_t tx_next_dd; /* next to check DD, for VPMD */
+ uint16_t tx_next_rs; /* next to set RS, for VPMD */
uint16_t ipsec_crypto_pkt_md_offset;
uint64_t mbuf_errors;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 28885800e0..42e09a2adf 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1742,18 +1742,19 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs(txq);
- nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
+ nb_commit = nb_pkts;
tx_id = txq->tx_tail;
txdp = &txq->tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -1768,7 +1769,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -1780,12 +1781,12 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -1806,7 +1807,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec_avx2(tx_queue, &tx_pkts[nb_tx],
num, offload);
nb_tx += ret;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index a899309f94..dc1fef24f0 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1854,18 +1854,18 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+ if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
- n = txq->rs_thresh >> txq->use_ctx;
+ n = txq->tx_rs_thresh >> txq->use_ctx;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
txep = (void *)txq->sw_ring;
- txep += (txq->next_dd >> txq->use_ctx) - (n - 1);
+ txep += (txq->tx_next_dd >> txq->use_ctx) - (n - 1);
if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
@@ -1951,12 +1951,12 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
done:
/* buffers were freed, update counters */
- txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
- txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
- if (txq->next_dd >= txq->nb_tx_desc)
- txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
- return txq->rs_thresh;
+ return txq->tx_rs_thresh;
}
static __rte_always_inline void
@@ -2319,19 +2319,20 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs_avx512(txq);
- nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
+ nb_commit = nb_pkts;
tx_id = txq->tx_tail;
txdp = &txq->tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -2346,7 +2347,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -2359,12 +2360,12 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -2386,10 +2387,10 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs_avx512(txq);
- nb_commit = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts << 1);
+ nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1);
nb_commit &= 0xFFFE;
if (unlikely(nb_commit == 0))
return 0;
@@ -2400,7 +2401,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
txep += (tx_id >> 1);
- txq->nb_free = (uint16_t)(txq->nb_free - nb_commit);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_commit);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (n != 0 && nb_commit >= n) {
@@ -2414,7 +2415,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
tx_id = 0;
/* avoid reach the end of ring */
txdp = txq->tx_ring;
@@ -2427,12 +2428,12 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -2452,7 +2453,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec_avx512(tx_queue, &tx_pkts[nb_tx],
num, offload);
nb_tx += ret;
@@ -2480,10 +2481,10 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
- if (!txq->sw_ring || txq->nb_free == max_desc)
+ if (!txq->sw_ring || txq->nb_tx_free == max_desc)
return;
- i = (txq->next_dd - txq->rs_thresh + 1) >> txq->use_ctx;
+ i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx;
while (i != end_desc) {
rte_pktmbuf_free_seg(swr[i].mbuf);
swr[i].mbuf = NULL;
@@ -2517,7 +2518,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->tx_rs_thresh);
num = num >> 1;
ret = iavf_xmit_fixed_burst_vec_avx512_ctx(tx_queue, &tx_pkts[nb_tx],
num, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 2c118cc059..ff24055c34 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -26,17 +26,17 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+ if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
- n = txq->rs_thresh;
+ n = txq->tx_rs_thresh;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
- txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+ txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
if (likely(m != NULL)) {
free[0] = m;
@@ -65,12 +65,12 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
}
/* buffers were freed, update counters */
- txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
- txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
- if (txq->next_dd >= txq->nb_tx_desc)
- txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
- return txq->rs_thresh;
+ return txq->tx_rs_thresh;
}
static inline void
@@ -109,10 +109,10 @@ _iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq)
unsigned i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
- if (!txq->sw_ring || txq->nb_free == max_desc)
+ if (!txq->sw_ring || txq->nb_tx_free == max_desc)
return;
- i = txq->next_dd - txq->rs_thresh + 1;
+ i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
while (i != txq->tx_tail) {
rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
txq->sw_ring[i].mbuf = NULL;
@@ -169,8 +169,8 @@ iavf_tx_vec_queue_default(struct iavf_tx_queue *txq)
if (!txq)
return -1;
- if (txq->rs_thresh < IAVF_VPMD_TX_MAX_BURST ||
- txq->rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF)
+ if (txq->tx_rs_thresh < IAVF_VPMD_TX_MAX_BURST ||
+ txq->tx_rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF)
return -1;
if (txq->offloads & IAVF_TX_NO_VECTOR_FLAGS)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index bc4b8f14c8..ed8455d669 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1374,10 +1374,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
int i;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs(txq);
- nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
nb_commit = nb_pkts;
@@ -1386,7 +1386,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txdp = &txq->tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -1400,7 +1400,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -1412,12 +1412,12 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -1441,7 +1441,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num);
nb_tx += ret;
nb_pkts -= ret;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 065ab3594c..0646a2f978 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1247,7 +1247,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
/* Virtchnnl configure tx queues by pairs */
if (i < adapter->dev_data->nb_tx_queues) {
vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
- vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+ vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
}
vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
diff --git a/drivers/net/ixgbe/base/ixgbe_osdep.h b/drivers/net/ixgbe/base/ixgbe_osdep.h
index 502f386b56..95dbe2bedd 100644
--- a/drivers/net/ixgbe/base/ixgbe_osdep.h
+++ b/drivers/net/ixgbe/base/ixgbe_osdep.h
@@ -124,7 +124,7 @@ static inline uint32_t ixgbe_read_addr(volatile void* addr)
rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg)
#define IXGBE_PCI_REG_ADDR(hw, reg) \
- ((volatile uint32_t *)((char *)(hw)->hw_addr + (reg)))
+ ((volatile void *)((char *)(hw)->hw_addr + (reg)))
#define IXGBE_PCI_REG_ARRAY_ADDR(hw, reg, index) \
IXGBE_PCI_REG_ADDR((hw), (reg) + ((index) << 2))
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index db4b993ebc..0a80b944f0 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -308,7 +308,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* update tail pointer */
rte_wmb();
- IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
@@ -946,7 +946,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
(unsigned) txq->port_id, (unsigned) txq->queue_id,
(unsigned) tx_id, (unsigned) nb_tx);
- IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, tx_id);
+ IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, tx_id);
txq->tx_tail = tx_id;
return nb_tx;
@@ -2786,11 +2786,11 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
hw->mac.type == ixgbe_mac_X550_vf ||
hw->mac.type == ixgbe_mac_X550EM_x_vf ||
hw->mac.type == ixgbe_mac_X550EM_a_vf)
- txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx));
+ txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx));
else
- txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
+ txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
/* Allocate software ring */
@@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
- txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
+ txq->sw_ring, txq->tx_ring, txq->tx_ring_dma);
/* set up vector or scalar TX function as appropriate */
ixgbe_set_tx_function(dev, txq);
@@ -5303,7 +5303,7 @@ ixgbe_dev_tx_init(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- bus_addr = txq->tx_ring_phys_addr;
+ bus_addr = txq->tx_ring_dma;
IXGBE_WRITE_REG(hw, IXGBE_TDBAL(txq->reg_idx),
(uint32_t)(bus_addr & 0x00000000ffffffffULL));
IXGBE_WRITE_REG(hw, IXGBE_TDBAH(txq->reg_idx),
@@ -5887,7 +5887,7 @@ ixgbevf_dev_tx_init(struct rte_eth_dev *dev)
/* Setup the Base and Length of the Tx Descriptor Rings */
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- bus_addr = txq->tx_ring_phys_addr;
+ bus_addr = txq->tx_ring_dma;
IXGBE_WRITE_REG(hw, IXGBE_VFTDBAL(i),
(uint32_t)(bus_addr & 0x00000000ffffffffULL));
IXGBE_WRITE_REG(hw, IXGBE_VFTDBAH(i),
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 1647396419..00e2009b3e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -186,12 +186,12 @@ struct ixgbe_advctx_info {
struct ixgbe_tx_queue {
/** TX ring virtual address. */
volatile union ixgbe_adv_tx_desc *tx_ring;
- uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
+ rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
union {
struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
};
- volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
+ volatile uint8_t *qtx_tail; /**< Address of TDT register. */
uint16_t nb_tx_desc; /**< number of TX descriptors. */
uint16_t tx_tail; /**< current value of TDT reg. */
/**< Start freeing TX buffers if there are less free descriptors than
@@ -218,7 +218,7 @@ struct ixgbe_tx_queue {
/** Hardware context0 history. */
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
const struct ixgbe_txq_ops *ops; /**< txq ops */
- uint8_t tx_deferred_start; /**< not in global dev start. */
+ bool tx_deferred_start; /**< not in global dev start. */
#ifdef RTE_LIB_SECURITY
uint8_t using_ipsec;
/**< indicates that IPsec TX feature is in use */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 02b53c008e..871c1a7cd2 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -628,7 +628,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_tail = tx_id;
- IXGBE_PCI_REG_WRITE(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index c8b5377c9f..37f2079519 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -751,7 +751,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_tail = tx_id;
- IXGBE_PCI_REG_WC_WRITE(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WC_WRITE(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
--
2.43.0
* [PATCH v3 05/22] drivers/net: add prefix for driver-specific structs
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (3 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 04/22] drivers/net: align Tx queue struct field names Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 06/22] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
` (16 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
In preparation for merging the Tx structs of multiple drivers into a
single struct, rename the driver-specific pointers in each struct to
carry a driver-name prefix, avoiding name conflicts after the merge.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
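The rename is mechanical: any member whose name would collide once the
per-driver Tx queue structs are merged gains its driver name as a
prefix. Sketched here for the ring pointer and VSI pointer in, for
example, the i40e, iavf, and ice structs touched below (fragments only):

	/* i40e_rxtx.h */
	volatile struct i40e_tx_desc *i40e_tx_ring; /* was tx_ring */
	struct i40e_vsi *i40e_vsi;                  /* was vsi */

	/* iavf_rxtx.h */
	volatile struct iavf_tx_desc *iavf_tx_ring; /* was tx_ring */
	struct iavf_vsi *iavf_vsi;                  /* was vsi */

	/* ice_rxtx.h */
	volatile struct ice_tx_desc *ice_tx_ring;   /* was tx_ring */
	struct ice_vsi *ice_vsi;                    /* was vsi */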
drivers/net/i40e/i40e_fdir.c | 6 +--
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 30 ++++++------
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 8 ++--
drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +-
drivers/net/i40e/i40e_rxtx_vec_neon.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_sse.c | 6 +--
drivers/net/iavf/iavf_rxtx.c | 24 +++++-----
drivers/net/iavf/iavf_rxtx.h | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 6 +--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++---
drivers/net/iavf/iavf_rxtx_vec_common.h | 2 +-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 6 +--
drivers/net/ice/ice_dcf_ethdev.c | 4 +-
drivers/net/ice/ice_rxtx.c | 48 +++++++++----------
drivers/net/ice/ice_rxtx.h | 4 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 6 +--
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 ++--
drivers/net/ice/ice_rxtx_vec_common.h | 4 +-
drivers/net/ice/ice_rxtx_vec_sse.c | 6 +--
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 22 ++++-----
drivers/net/ixgbe/ixgbe_rxtx.h | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 6 +--
29 files changed, 128 insertions(+), 128 deletions(-)
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index 47f79ecf11..c600167634 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -1383,7 +1383,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev)
volatile struct i40e_tx_desc *tmp_txdp;
tmp_tail = txq->tx_tail;
- tmp_txdp = &txq->tx_ring[tmp_tail + 1];
+ tmp_txdp = &txq->i40e_tx_ring[tmp_tail + 1];
do {
if ((tmp_txdp->cmd_type_offset_bsz &
@@ -1640,7 +1640,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
PMD_DRV_LOG(INFO, "filling filter programming descriptor.");
fdirdp = (volatile struct i40e_filter_program_desc *)
- (&txq->tx_ring[txq->tx_tail]);
+ (&txq->i40e_tx_ring[txq->tx_tail]);
fdirdp->qindex_flex_ptype_vsi =
rte_cpu_to_le_32((fdir_action->rx_queue <<
@@ -1710,7 +1710,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
fdirdp->fd_id = rte_cpu_to_le_32(filter->soft_id);
PMD_DRV_LOG(INFO, "filling transmit descriptor.");
- txdp = &txq->tx_ring[txq->tx_tail + 1];
+ txdp = &txq->i40e_tx_ring[txq->tx_tail + 1];
txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr[txq->tx_tail >> 1]);
td_cmd = I40E_TX_DESC_CMD_EOP |
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 260d238ce4..8679e5c1fd 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -75,7 +75,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
return 0;
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b0bb20fe9a..34ef931859 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -379,7 +379,7 @@ static inline int
i40e_xmit_cleanup(struct i40e_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile struct i40e_tx_desc *txd = txq->tx_ring;
+ volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -1103,7 +1103,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq = tx_queue;
sw_ring = txq->sw_ring;
- txr = txq->tx_ring;
+ txr = txq->i40e_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
@@ -1338,7 +1338,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
const uint16_t k = RTE_ALIGN_FLOOR(tx_rs_thresh, RTE_I40E_TX_MAX_FREE_BUF_SZ);
const uint16_t m = tx_rs_thresh % RTE_I40E_TX_MAX_FREE_BUF_SZ;
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -1417,7 +1417,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
+ volatile struct i40e_tx_desc *txdp = &txq->i40e_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
@@ -1445,7 +1445,7 @@ tx_xmit_pkts(struct i40e_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- volatile struct i40e_tx_desc *txr = txq->tx_ring;
+ volatile struct i40e_tx_desc *txr = txq->i40e_tx_ring;
uint16_t n = 0;
/**
@@ -1556,7 +1556,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
bool pkt_error = false;
const char *reason = NULL;
uint16_t good_pkts = nb_pkts;
- struct i40e_adapter *adapter = txq->vsi->adapter;
+ struct i40e_adapter *adapter = txq->i40e_vsi->adapter;
for (idx = 0; idx < nb_pkts; idx++) {
mb = tx_pkts[idx];
@@ -2329,7 +2329,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->i40e_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_le_to_cpu_64(I40E_TXD_QW1_DTYPE_MASK);
expect = rte_cpu_to_le_64(
I40E_TX_DESC_DTYPE_DESC_DONE << I40E_TXD_QW1_DTYPE_SHIFT);
@@ -2527,7 +2527,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct i40e_tx_desc) * I40E_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "i40e_tx_ring", queue_idx,
ring_size, I40E_RING_BASE_ALIGN, socket_id);
if (!tz) {
i40e_tx_queue_release(txq);
@@ -2546,11 +2546,11 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->reg_idx = reg_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
- txq->vsi = vsi;
+ txq->i40e_vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
+ txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr;
/* Allocate software ring */
txq->sw_ring =
@@ -2885,11 +2885,11 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct i40e_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->i40e_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile struct i40e_tx_desc *txd = &txq->tx_ring[i];
+ volatile struct i40e_tx_desc *txd = &txq->i40e_tx_ring[i];
txd->cmd_type_offset_bsz =
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
@@ -2914,7 +2914,7 @@ int
i40e_tx_queue_init(struct i40e_tx_queue *txq)
{
enum i40e_status_code err = I40E_SUCCESS;
- struct i40e_vsi *vsi = txq->vsi;
+ struct i40e_vsi *vsi = txq->i40e_vsi;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t pf_q = txq->reg_idx;
struct i40e_hmc_obj_txq tx_ctx;
@@ -3207,10 +3207,10 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
txq->nb_tx_desc = I40E_FDIR_NUM_TX_DESC;
txq->queue_id = I40E_FDIR_QUEUE_ID;
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
- txq->vsi = pf->fdir.fdir_vsi;
+ txq->i40e_vsi = pf->fdir.fdir_vsi;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
+ txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr;
/*
* don't need to allocate software ring and reset for the fdir
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index f420c98687..8315ee2f59 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -130,7 +130,7 @@ struct i40e_rx_queue {
struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
rte_iova_t tx_ring_dma; /**< TX ring DMA address */
- volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
+ volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */
struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
volatile uint8_t *qtx_tail; /**< register address of tail */
@@ -150,7 +150,7 @@ struct i40e_tx_queue {
uint16_t port_id; /**< Device port identifier. */
uint16_t queue_id; /**< TX queue index. */
uint16_t reg_idx;
- struct i40e_vsi *vsi; /**< the VSI this queue belongs to */
+ struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */
uint16_t tx_next_dd;
uint16_t tx_next_rs;
bool q_set; /**< indicate if tx queue has been configured */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 80f07a3e10..bf0e9ebd71 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -568,7 +568,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -588,7 +588,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -598,7 +598,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index b26bae4757..5042e348db 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -758,7 +758,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -779,7 +779,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -789,7 +789,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 8b8a16daa8..04fbe3b2e3 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -764,7 +764,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -948,7 +948,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -970,7 +970,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->i40e_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -980,7 +980,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 325e99c1a4..e81f958361 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -26,7 +26,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 26bc345a0a..05191e4884 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -695,7 +695,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -715,7 +715,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -725,7 +725,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index ebc32b0d27..d81b553842 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -714,7 +714,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -734,7 +734,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -744,7 +744,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index adaaeb4625..6eda91e76b 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -296,11 +296,11 @@ reset_tx_queue(struct iavf_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct iavf_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->iavf_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i].cmd_type_offset_bsz =
+ txq->iavf_tx_ring[i].cmd_type_offset_bsz =
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
txe[i].mbuf = NULL;
txe[i].last_id = i;
@@ -851,7 +851,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
- txq->vsi = vsi;
+ txq->iavf_vsi = vsi;
if (iavf_ipsec_crypto_supported(adapter))
txq->ipsec_crypto_pkt_md_offset =
@@ -872,7 +872,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct iavf_tx_desc) * IAVF_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, IAVF_DMA_MEM_ALIGN);
- mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ mz = rte_eth_dma_zone_reserve(dev, "iavf_tx_ring", queue_idx,
ring_size, IAVF_RING_BASE_ALIGN,
socket_id);
if (!mz) {
@@ -882,7 +882,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
txq->tx_ring_dma = mz->iova;
- txq->tx_ring = (struct iavf_tx_desc *)mz->addr;
+ txq->iavf_tx_ring = (struct iavf_tx_desc *)mz->addr;
txq->mz = mz;
reset_tx_queue(txq);
@@ -2385,7 +2385,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
uint16_t desc_to_clean_to;
uint16_t nb_tx_to_clean;
- volatile struct iavf_tx_desc *txd = txq->tx_ring;
+ volatile struct iavf_tx_desc *txd = txq->iavf_tx_ring;
desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
if (desc_to_clean_to >= nb_tx_desc)
@@ -2796,7 +2796,7 @@ uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct iavf_tx_queue *txq = tx_queue;
- volatile struct iavf_tx_desc *txr = txq->tx_ring;
+ volatile struct iavf_tx_desc *txr = txq->iavf_tx_ring;
struct ci_tx_entry *txe_ring = txq->sw_ring;
struct ci_tx_entry *txe, *txn;
struct rte_mbuf *mb, *mb_seg;
@@ -3803,10 +3803,10 @@ iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
struct iavf_tx_queue *txq = tx_queue;
enum iavf_tx_burst_type tx_burst_type;
- if (!txq->vsi || txq->vsi->adapter->no_poll)
+ if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll)
return 0;
- tx_burst_type = txq->vsi->adapter->tx_burst_type;
+ tx_burst_type = txq->iavf_vsi->adapter->tx_burst_type;
return iavf_tx_pkt_burst_ops[tx_burst_type](tx_queue,
tx_pkts, nb_pkts);
@@ -3824,9 +3824,9 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
const char *reason = NULL;
bool pkt_error = false;
struct iavf_tx_queue *txq = tx_queue;
- struct iavf_adapter *adapter = txq->vsi->adapter;
+ struct iavf_adapter *adapter = txq->iavf_vsi->adapter;
enum iavf_tx_burst_type tx_burst_type =
- txq->vsi->adapter->tx_burst_type;
+ txq->iavf_vsi->adapter->tx_burst_type;
for (idx = 0; idx < nb_pkts; idx++) {
mb = tx_pkts[idx];
@@ -4440,7 +4440,7 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->iavf_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_le_to_cpu_64(IAVF_TXD_QW1_DTYPE_MASK);
expect = rte_cpu_to_le_64(
IAVF_TX_DESC_DTYPE_DESC_DONE << IAVF_TXD_QW1_DTYPE_SHIFT);
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 44e2de731c..cc1eaaf54c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -276,7 +276,7 @@ struct iavf_rx_queue {
/* Structure associated with each TX queue. */
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
+ volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */
rte_iova_t tx_ring_dma; /* Tx ring DMA address */
struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
@@ -289,7 +289,7 @@ struct iavf_tx_queue {
uint16_t tx_free_thresh;
uint16_t tx_rs_thresh;
uint8_t rel_mbufs_type;
- struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
+ struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */
uint16_t port_id;
uint16_t queue_id;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 42e09a2adf..f33ceceee1 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1751,7 +1751,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -1772,7 +1772,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -1782,7 +1782,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index dc1fef24f0..97420a75fd 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1854,7 +1854,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -2328,7 +2328,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -2350,7 +2350,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
}
@@ -2361,7 +2361,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
@@ -2397,7 +2397,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = nb_commit >> 1;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += (tx_id >> 1);
@@ -2418,7 +2418,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
tx_id = 0;
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->iavf_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -2429,7 +2429,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index ff24055c34..6305c8cdd6 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -26,7 +26,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ed8455d669..64c3bf0eaa 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1383,7 +1383,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -1403,7 +1403,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -1413,7 +1413,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4b98e4066b..4ffd1f5567 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -401,11 +401,11 @@ reset_tx_queue(struct ice_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->ice_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i].cmd_type_offset_bsz =
+ txq->ice_tx_ring[i].cmd_type_offset_bsz =
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
txe[i].mbuf = NULL;
txe[i].last_id = i;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index d584086a36..5ec92f6d0c 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -776,7 +776,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
if (!txq_elem)
return -ENOMEM;
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
hw = ICE_VSI_TO_HW(vsi);
pf = ICE_VSI_TO_PF(vsi);
@@ -966,7 +966,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
if (!txq_elem)
return -ENOMEM;
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
hw = ICE_VSI_TO_HW(vsi);
memset(&tx_ctx, 0, sizeof(tx_ctx));
@@ -1039,11 +1039,11 @@ ice_reset_tx_queue(struct ice_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->ice_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+ volatile struct ice_tx_desc *txd = &txq->ice_tx_ring[i];
txd->cmd_type_offset_bsz =
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
@@ -1153,7 +1153,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
PMD_DRV_LOG(INFO, "TX queue %u not started", tx_queue_id);
return 0;
}
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
q_ids[0] = txq->reg_idx;
q_teids[0] = txq->q_teid;
@@ -1479,7 +1479,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "ice_tx_ring", queue_idx,
ring_size, ICE_RING_BASE_ALIGN,
socket_id);
if (!tz) {
@@ -1500,11 +1500,11 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
txq->reg_idx = vsi->base_queue + queue_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
- txq->vsi = vsi;
+ txq->ice_vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = tz->addr;
+ txq->ice_tx_ring = tz->addr;
/* Allocate software ring */
txq->sw_ring =
@@ -2372,7 +2372,7 @@ ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->ice_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
ICE_TXD_QW1_DTYPE_S);
@@ -2452,10 +2452,10 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
txq->nb_tx_desc = ICE_FDIR_NUM_TX_DESC;
txq->queue_id = ICE_FDIR_QUEUE_ID;
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
- txq->vsi = pf->fdir.fdir_vsi;
+ txq->ice_vsi = pf->fdir.fdir_vsi;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+ txq->ice_tx_ring = (struct ice_tx_desc *)tz->addr;
/*
* don't need to allocate software ring and reset for the fdir
* program queue just set the queue has been configured.
@@ -2838,7 +2838,7 @@ static inline int
ice_xmit_cleanup(struct ice_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile struct ice_tx_desc *txd = txq->tx_ring;
+ volatile struct ice_tx_desc *txd = txq->ice_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -2959,7 +2959,7 @@ uint16_t
ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct ice_tx_queue *txq;
- volatile struct ice_tx_desc *tx_ring;
+ volatile struct ice_tx_desc *ice_tx_ring;
volatile struct ice_tx_desc *txd;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
@@ -2981,7 +2981,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq = tx_queue;
sw_ring = txq->sw_ring;
- tx_ring = txq->tx_ring;
+ ice_tx_ring = txq->ice_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
@@ -3064,7 +3064,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Setup TX context descriptor if required */
volatile struct ice_tx_ctx_desc *ctx_txd =
(volatile struct ice_tx_ctx_desc *)
- &tx_ring[tx_id];
+ &ice_tx_ring[tx_id];
uint16_t cd_l2tag2 = 0;
uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
@@ -3082,7 +3082,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
cd_type_cmd_tso_mss |=
((uint64_t)ICE_TX_CTX_DESC_TSYN <<
ICE_TXD_CTX_QW1_CMD_S) |
- (((uint64_t)txq->vsi->adapter->ptp_tx_index <<
+ (((uint64_t)txq->ice_vsi->adapter->ptp_tx_index <<
ICE_TXD_CTX_QW1_TSYN_S) & ICE_TXD_CTX_QW1_TSYN_M);
ctx_txd->tunneling_params =
@@ -3106,7 +3106,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
m_seg = tx_pkt;
do {
- txd = &tx_ring[tx_id];
+ txd = &ice_tx_ring[tx_id];
txn = &sw_ring[txe->next_id];
if (txe->mbuf)
@@ -3134,7 +3134,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txe->last_id = tx_last;
tx_id = txe->next_id;
txe = txn;
- txd = &tx_ring[tx_id];
+ txd = &ice_tx_ring[tx_id];
txn = &sw_ring[txe->next_id];
}
@@ -3187,7 +3187,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
struct ci_tx_entry *txep;
uint16_t i;
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -3360,7 +3360,7 @@ static inline void
ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+ volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
@@ -3393,7 +3393,7 @@ tx_xmit_pkts(struct ice_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- volatile struct ice_tx_desc *txr = txq->tx_ring;
+ volatile struct ice_tx_desc *txr = txq->ice_tx_ring;
uint16_t n = 0;
/**
@@ -3722,7 +3722,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
bool pkt_error = false;
uint16_t good_pkts = nb_pkts;
const char *reason = NULL;
- struct ice_adapter *adapter = txq->vsi->adapter;
+ struct ice_adapter *adapter = txq->ice_vsi->adapter;
uint64_t ol_flags;
for (idx = 0; idx < nb_pkts; idx++) {
@@ -4701,11 +4701,11 @@ ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc)
uint16_t i;
fdirdp = (volatile struct ice_fltr_desc *)
- (&txq->tx_ring[txq->tx_tail]);
+ (&txq->ice_tx_ring[txq->tx_tail]);
fdirdp->qidx_compq_space_stat = fdir_desc->qidx_compq_space_stat;
fdirdp->dtype_cmd_vsi_fdid = fdir_desc->dtype_cmd_vsi_fdid;
- txdp = &txq->tx_ring[txq->tx_tail + 1];
+ txdp = &txq->ice_tx_ring[txq->tx_tail + 1];
txdp->buf_addr = rte_cpu_to_le_64(pf->fdir.dma_addr);
td_cmd = ICE_TX_DESC_CMD_EOP |
ICE_TX_DESC_CMD_RS |
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 8d1a1a8676..3257f449f5 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -148,7 +148,7 @@ struct ice_rx_queue {
struct ice_tx_queue {
uint16_t nb_tx_desc; /* number of TX descriptors */
rte_iova_t tx_ring_dma; /* TX ring DMA address */
- volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+ volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
uint16_t tx_tail; /* current value of tail register */
volatile uint8_t *qtx_tail; /* register address of tail */
@@ -171,7 +171,7 @@ struct ice_tx_queue {
uint32_t q_teid; /* TX schedule node id. */
uint16_t reg_idx;
uint64_t offloads;
- struct ice_vsi *vsi; /* the VSI this queue belongs to */
+ struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */
uint16_t tx_next_dd;
uint16_t tx_next_rs;
uint64_t mbuf_errors;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 336697e72d..dde07ac99e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -874,7 +874,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -895,7 +895,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -905,7 +905,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 6b6aa3f1fe..e4d0270176 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -869,7 +869,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -1071,7 +1071,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -1093,7 +1093,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->ice_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -1103,7 +1103,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 32e4541267..7b865b53ad 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -22,7 +22,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -121,7 +121,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
#ifdef __AVX512VL__
- struct rte_eth_dev *dev = &rte_eth_devices[txq->vsi->adapter->pf.dev_data->port_id];
+ struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id];
if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index debdd8f6a2..364207e8a8 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -717,7 +717,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -737,7 +737,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -747,7 +747,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index 2241726ad8..a878db3150 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -72,7 +72,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
return 0;
/* check DD bits on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & IXGBE_ADVTXD_STAT_DD))
return 0;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0a80b944f0..f7ddbba1b6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -106,7 +106,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
/* check DD bit on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD)))
return 0;
@@ -198,7 +198,7 @@ static inline void
ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
+ volatile union ixgbe_adv_tx_desc *txdp = &txq->ixgbe_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP-1;
@@ -232,7 +232,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
- volatile union ixgbe_adv_tx_desc *tx_r = txq->tx_ring;
+ volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring;
uint16_t n = 0;
/*
@@ -564,7 +564,7 @@ static inline int
ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring;
+ volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -652,7 +652,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.data[1] = 0;
txq = tx_queue;
sw_ring = txq->sw_ring;
- txr = txq->tx_ring;
+ txr = txq->ixgbe_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
txp = NULL;
@@ -2495,13 +2495,13 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
/* Zero out HW ring memory */
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i] = zeroed_desc;
+ txq->ixgbe_tx_ring[i] = zeroed_desc;
}
/* Initialize SW ring entries */
prev = (uint16_t) (txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i];
+ volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i];
txd->wb.status = rte_cpu_to_le_32(IXGBE_TXD_STAT_DD);
txe[i].mbuf = NULL;
@@ -2751,7 +2751,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
* handle the maximum ring size is allocated in order to allow for
* resizing in later calls to the queue setup function.
*/
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "ixgbe_tx_ring", queue_idx,
sizeof(union ixgbe_adv_tx_desc) * IXGBE_MAX_RING_DESC,
IXGBE_ALIGN, socket_id);
if (tz == NULL) {
@@ -2791,7 +2791,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
+ txq->ixgbe_tx_ring = (union ixgbe_adv_tx_desc *)tz->addr;
/* Allocate software ring */
txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
@@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
- txq->sw_ring, txq->tx_ring, txq->tx_ring_dma);
+ txq->sw_ring, txq->ixgbe_tx_ring, txq->tx_ring_dma);
/* set up vector or scalar TX function as appropriate */
ixgbe_set_tx_function(dev, txq);
@@ -3328,7 +3328,7 @@ ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].wb.status;
+ status = &txq->ixgbe_tx_ring[desc].wb.status;
if (*status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD))
return RTE_ETH_TX_DESC_DONE;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 00e2009b3e..f6bae37cf3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -185,7 +185,7 @@ struct ixgbe_advctx_info {
*/
struct ixgbe_tx_queue {
/** TX ring virtual address. */
- volatile union ixgbe_adv_tx_desc *tx_ring;
+ volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
union {
struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index e9592c0d08..cc51bf6eed 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -22,7 +22,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
/* check DD bit on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & IXGBE_ADVTXD_STAT_DD))
return 0;
@@ -154,11 +154,11 @@ _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
/* Zero out HW ring memory */
for (i = 0; i < txq->nb_tx_desc; i++)
- txq->tx_ring[i] = zeroed_desc;
+ txq->ixgbe_tx_ring[i] = zeroed_desc;
/* Initialize SW ring entries */
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i];
+ volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i];
txd->wb.status = IXGBE_TXD_STAT_DD;
txe[i].mbuf = NULL;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 871c1a7cd2..06be7ec82a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -590,7 +590,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -610,7 +610,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
}
@@ -620,7 +620,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |=
+ txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |=
rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
txq->tx_next_rs = (uint16_t)(txq->tx_next_rs +
txq->tx_rs_thresh);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 37f2079519..a21a57bd55 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -712,7 +712,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -733,7 +733,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &(txq->tx_ring[tx_id]);
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
}
@@ -743,7 +743,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |=
+ txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |=
rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
txq->tx_next_rs = (uint16_t)(txq->tx_next_rs +
txq->tx_rs_thresh);
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
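The hunks above are mechanical renames: each driver-specific field gains a driver prefix (tx_ring becomes iavf_tx_ring, ice_tx_ring or ixgbe_tx_ring; vsi becomes ice_vsi) so that the members remain unambiguous once several drivers later share one queue struct. A minimal before/after sketch of the pattern (illustrative only; unrelated fields elided):

	/* before: each driver used the generic name tx_ring */
	struct ice_tx_queue {
		volatile struct ice_tx_desc *tx_ring;
		/* ... */
	};

	/* after: the field carries a driver prefix, ready to become a
	 * distinguishable member of a shared union in a later patch
	 */
	struct ice_tx_queue {
		volatile struct ice_tx_desc *ice_tx_ring;
		/* ... */
	};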
* [PATCH v3 06/22] net/_common_intel: merge ice and i40e Tx queue struct
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (4 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 05/22] drivers/net: add prefix for driver-specific structs Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 07/22] net/iavf: use common Tx queue structure Bruce Richardson
` (15 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Anatoly Burakov
The queue structures of the i40e and ice drivers are virtually identical,
so merge them into a common struct. This should allow easier merging of
functions in future, using that common struct.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
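As a sketch of how the merged structure is intended to be used (illustrative only, not part of this patch): the per-driver descriptor-ring and VSI pointers sit in anonymous unions, so each driver keeps type-safe access to its own layout while shared helpers touch only the common fields. Assuming the ci_tx_queue definition added below and the usual DPDK/driver headers, hypothetical helpers might look like:

	/* hypothetical examples, not in this patch */
	static inline void
	ice_example_mark_done(struct ci_tx_queue *txq, uint16_t idx)
	{
		/* ice code selects the ice view of the ring union */
		txq->ice_tx_ring[idx].cmd_type_offset_bsz =
			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
	}

	static inline void
	i40e_example_mark_done(struct ci_tx_queue *txq, uint16_t idx)
	{
		/* i40e code selects the i40e view of the same storage */
		txq->i40e_tx_ring[idx].cmd_type_offset_bsz =
			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
	}

	/* driver-agnostic code needs only the common fields */
	static inline uint16_t
	ci_example_free_count(const struct ci_tx_queue *txq)
	{
		return txq->nb_tx_free;
	}

The same pattern covers the driver-specific tail of the struct (q_teid and tx_rel_mbufs for ice, dcb_tc for i40e).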
drivers/net/_common_intel/tx.h | 55 +++++++++++++++++
drivers/net/i40e/i40e_ethdev.c | 4 +-
drivers/net/i40e/i40e_ethdev.h | 4 +-
drivers/net/i40e/i40e_fdir.c | 4 +-
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 58 +++++++++---------
drivers/net/i40e/i40e_rxtx.h | 50 ++--------------
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +-
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +-
drivers/net/ice/ice_dcf.c | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 10 ++--
drivers/net/ice/ice_diagnose.c | 2 +-
drivers/net/ice/ice_ethdev.c | 2 +-
drivers/net/ice/ice_ethdev.h | 4 +-
drivers/net/ice/ice_rxtx.c | 60 +++++++++----------
drivers/net/ice/ice_rxtx.h | 41 +------------
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +--
drivers/net/ice/ice_rxtx_vec_common.h | 8 +--
drivers/net/ice/ice_rxtx_vec_sse.c | 6 +-
24 files changed, 165 insertions(+), 185 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 5397007411..c965f5ee6c 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -8,6 +8,9 @@
#include <stdint.h>
#include <rte_mbuf.h>
+/* forward declaration of the common intel (ci) queue structure */
+struct ci_tx_queue;
+
/**
* Structure associated with each descriptor of the TX ring of a TX queue.
*/
@@ -24,6 +27,58 @@ struct ci_tx_entry_vec {
struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
};
+typedef void (*ice_tx_release_mbufs_t)(struct ci_tx_queue *txq);
+
+struct ci_tx_queue {
+ union { /* TX ring virtual address */
+ volatile struct ice_tx_desc *ice_tx_ring;
+ volatile struct i40e_tx_desc *i40e_tx_ring;
+ };
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ rte_iova_t tx_ring_dma; /* TX ring DMA address */
+ uint16_t nb_tx_desc; /* number of TX descriptors */
+ uint16_t tx_tail; /* current value of tail register */
+ uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+ /* index to last TX descriptor to have been cleaned */
+ uint16_t last_desc_cleaned;
+ /* Total number of TX descriptors ready to be allocated. */
+ uint16_t nb_tx_free;
+ /* Start freeing TX buffers if there are less free descriptors than
+ * this value.
+ */
+ uint16_t tx_free_thresh;
+ /* Number of TX descriptors to use before RS bit is set. */
+ uint16_t tx_rs_thresh;
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
+ uint16_t port_id; /* Device port identifier. */
+ uint16_t queue_id; /* TX queue index. */
+ uint16_t reg_idx;
+ uint64_t offloads;
+ uint16_t tx_next_dd;
+ uint16_t tx_next_rs;
+ uint64_t mbuf_errors;
+ bool tx_deferred_start; /* don't start this queue in dev start */
+ bool q_set; /* indicate if tx queue has been configured */
+ union { /* the VSI this queue belongs to */
+ struct ice_vsi *ice_vsi;
+ struct i40e_vsi *i40e_vsi;
+ };
+ const struct rte_memzone *mz;
+
+ union {
+ struct { /* ICE driver specific values */
+ ice_tx_release_mbufs_t tx_rel_mbufs;
+ uint32_t q_teid; /* TX schedule node id. */
+ };
+ struct { /* I40E driver specific values */
+ uint8_t dcb_tc;
+ };
+ };
+};
+
static __rte_always_inline void
ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 30dcdc68a8..bf5560ccc8 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3685,7 +3685,7 @@ i40e_dev_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct i40e_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
@@ -6585,7 +6585,7 @@ i40e_dev_tx_init(struct i40e_pf *pf)
struct rte_eth_dev_data *data = pf->dev_data;
uint16_t i;
uint32_t ret = I40E_SUCCESS;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (i = 0; i < data->nb_tx_queues; i++) {
txq = data->tx_queues[i];
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 98213948b4..d351193ed9 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -334,7 +334,7 @@ struct i40e_vsi_list {
};
struct i40e_rx_queue;
-struct i40e_tx_queue;
+struct ci_tx_queue;
/* Bandwidth limit information */
struct i40e_bw_info {
@@ -738,7 +738,7 @@ TAILQ_HEAD(i40e_fdir_filter_list, i40e_fdir_filter);
struct i40e_fdir_info {
struct i40e_vsi *fdir_vsi; /* pointer to fdir VSI structure */
uint16_t match_counter_index; /* Statistic counter index used for fdir*/
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct i40e_rx_queue *rxq;
void *prg_pkt[I40E_FDIR_PRG_PKT_CNT]; /* memory for fdir program packet */
uint64_t dma_addr[I40E_FDIR_PRG_PKT_CNT]; /* physic address of packet memory*/
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index c600167634..349627a2ed 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -1372,7 +1372,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_fdir_info *fdir_info = &pf->fdir;
- struct i40e_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
/* no available buffer
* search for more available buffers from the current
@@ -1628,7 +1628,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
const struct i40e_fdir_filter_conf *filter,
bool add, bool wait_status)
{
- struct i40e_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
struct i40e_rx_queue *rxq = pf->fdir.rxq;
const struct i40e_fdir_action *fdir_action = &filter->action;
volatile struct i40e_tx_desc *txdp;
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 8679e5c1fd..5a65c80d90 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -55,7 +55,7 @@ uint16_t
i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 34ef931859..305bc53480 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -376,7 +376,7 @@ i40e_build_ctob(uint32_t td_cmd,
}
static inline int
-i40e_xmit_cleanup(struct i40e_tx_queue *txq)
+i40e_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring;
@@ -1080,7 +1080,7 @@ i40e_calc_pkt_desc(struct rte_mbuf *tx_pkt)
uint16_t
i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
volatile struct i40e_tx_desc *txd;
@@ -1329,7 +1329,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
static __rte_always_inline int
-i40e_tx_free_bufs(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint16_t tx_rs_thresh = txq->tx_rs_thresh;
@@ -1413,7 +1413,7 @@ tx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts)
/* Fill hardware descriptor ring with mbuf data */
static inline void
-i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
+i40e_tx_fill_hw_ring(struct ci_tx_queue *txq,
struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
@@ -1441,7 +1441,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
}
static inline uint16_t
-tx_xmit_pkts(struct i40e_tx_queue *txq,
+tx_xmit_pkts(struct ci_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
@@ -1504,14 +1504,14 @@ i40e_xmit_pkts_simple(void *tx_queue,
uint16_t nb_tx = 0;
if (likely(nb_pkts <= I40E_TX_MAX_BURST))
- return tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ return tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
tx_pkts, nb_pkts);
while (nb_pkts) {
uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
I40E_TX_MAX_BURST);
- ret = tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ ret = tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
&tx_pkts[nb_tx], num);
nb_tx = (uint16_t)(nb_tx + ret);
nb_pkts = (uint16_t)(nb_pkts - ret);
@@ -1527,7 +1527,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1549,7 +1549,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
static uint16_t
i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
uint16_t idx;
uint64_t ol_flags;
struct rte_mbuf *mb;
@@ -1611,7 +1611,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
pkt_error = true;
break;
}
- if (mb->nb_segs > ((struct i40e_tx_queue *)tx_queue)->nb_tx_desc) {
+ if (mb->nb_segs > ((struct ci_tx_queue *)tx_queue)->nb_tx_desc) {
PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
pkt_error = true;
break;
@@ -1873,7 +1873,7 @@ int
i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
int err;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1907,7 +1907,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int
i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -2311,7 +2311,7 @@ i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
@@ -2341,7 +2341,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
static int
i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
- struct i40e_tx_queue *txq)
+ struct ci_tx_queue *txq)
{
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -2394,7 +2394,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
{
struct i40e_vsi *vsi;
struct i40e_pf *pf = NULL;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -2515,7 +2515,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("i40e tx queue",
- sizeof(struct i40e_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -2600,7 +2600,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
void
i40e_tx_queue_release(void *txq)
{
- struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
@@ -2705,7 +2705,7 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
}
void
-i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
+i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
struct rte_eth_dev *dev;
uint16_t i;
@@ -2765,7 +2765,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
}
static int
-i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
+i40e_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -2824,7 +2824,7 @@ i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
}
static int
-i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
+i40e_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -2848,7 +2848,7 @@ i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
}
static int
-i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
+i40e_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -2856,7 +2856,7 @@ i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
int
i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -2872,7 +2872,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
}
void
-i40e_reset_tx_queue(struct i40e_tx_queue *txq)
+i40e_reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint16_t i, prev, size;
@@ -2911,7 +2911,7 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq)
/* Init the TX queue in hardware */
int
-i40e_tx_queue_init(struct i40e_tx_queue *txq)
+i40e_tx_queue_init(struct ci_tx_queue *txq)
{
enum i40e_status_code err = I40E_SUCCESS;
struct i40e_vsi *vsi = txq->i40e_vsi;
@@ -3167,7 +3167,7 @@ i40e_dev_free_queues(struct rte_eth_dev *dev)
enum i40e_status_code
i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz = NULL;
struct rte_eth_dev *dev;
uint32_t ring_size;
@@ -3181,7 +3181,7 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("i40e fdir tx queue",
- sizeof(struct i40e_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
SOCKET_ID_ANY);
if (!txq) {
@@ -3304,7 +3304,7 @@ void
i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -3552,7 +3552,7 @@ i40e_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
}
void __rte_cold
-i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
+i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3592,7 +3592,7 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
#endif
if (ad->tx_vec_allowed) {
for (i = 0; i < dev->data->nb_tx_queues; i++) {
- struct i40e_tx_queue *txq =
+ struct ci_tx_queue *txq =
dev->data->tx_queues[i];
if (txq && i40e_txq_vec_setup(txq)) {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 8315ee2f59..043d1df912 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -124,44 +124,6 @@ struct i40e_rx_queue {
const struct rte_memzone *mz;
};
-/*
- * Structure associated with each TX queue.
- */
-struct i40e_tx_queue {
- uint16_t nb_tx_desc; /**< number of TX descriptors */
- rte_iova_t tx_ring_dma; /**< TX ring DMA address */
- volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */
- struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
- uint16_t tx_tail; /**< current value of tail register */
- volatile uint8_t *qtx_tail; /**< register address of tail */
- uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
- /**< index to last TX descriptor to have been cleaned */
- uint16_t last_desc_cleaned;
- /**< Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- /**< Start freeing TX buffers if there are less free descriptors than
- this value. */
- uint16_t tx_free_thresh;
- /** Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint16_t port_id; /**< Device port identifier. */
- uint16_t queue_id; /**< TX queue index. */
- uint16_t reg_idx;
- struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */
- uint16_t tx_next_dd;
- uint16_t tx_next_rs;
- bool q_set; /**< indicate if tx queue has been configured */
- uint64_t mbuf_errors;
-
- bool tx_deferred_start; /**< don't start this queue in dev start */
- uint8_t dcb_tc; /**< Traffic class of tx queue */
- uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
- const struct rte_memzone *mz;
-};
-
/** Offload features */
union i40e_tx_offload {
uint64_t data;
@@ -209,15 +171,15 @@ uint16_t i40e_simple_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
uint16_t i40e_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int i40e_tx_queue_init(struct i40e_tx_queue *txq);
+int i40e_tx_queue_init(struct ci_tx_queue *txq);
int i40e_rx_queue_init(struct i40e_rx_queue *rxq);
-void i40e_free_tx_resources(struct i40e_tx_queue *txq);
+void i40e_free_tx_resources(struct ci_tx_queue *txq);
void i40e_free_rx_resources(struct i40e_rx_queue *rxq);
void i40e_dev_clear_queues(struct rte_eth_dev *dev);
void i40e_dev_free_queues(struct rte_eth_dev *dev);
void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
-void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+void i40e_reset_tx_queue(struct ci_tx_queue *txq);
+void i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq);
int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
@@ -237,13 +199,13 @@ uint16_t i40e_recv_scattered_pkts_vec(void *rx_queue,
uint16_t nb_pkts);
int i40e_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
int i40e_rxq_vec_setup(struct i40e_rx_queue *rxq);
-int i40e_txq_vec_setup(struct i40e_tx_queue *txq);
+int i40e_txq_vec_setup(struct ci_tx_queue *txq);
void i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq);
uint16_t i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
void i40e_set_rx_function(struct rte_eth_dev *dev);
void i40e_set_tx_function_flag(struct rte_eth_dev *dev,
- struct i40e_tx_queue *txq);
+ struct ci_tx_queue *txq);
void i40e_set_tx_function(struct rte_eth_dev *dev);
void i40e_set_default_ptype_table(struct rte_eth_dev *dev);
void i40e_set_default_pctype_table(struct rte_eth_dev *dev);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index bf0e9ebd71..500bba2cef 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -551,7 +551,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -625,7 +625,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused * txq)
+i40e_txq_vec_setup(struct ci_tx_queue __rte_unused * txq)
{
return 0;
}
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 5042e348db..29bef64287 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -743,7 +743,7 @@ static inline uint16_t
i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -808,7 +808,7 @@ i40e_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 04fbe3b2e3..a3f6d1667f 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -755,7 +755,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
}
static __rte_always_inline int
-i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -933,7 +933,7 @@ static inline uint16_t
i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -999,7 +999,7 @@ i40e_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index e81f958361..57d6263ccf 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -17,7 +17,7 @@
#endif
static __rte_always_inline int
-i40e_tx_free_bufs(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 05191e4884..c97f337e43 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -679,7 +679,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
struct rte_mbuf **__rte_restrict tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -753,7 +753,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq)
+i40e_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
return 0;
}
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index d81b553842..2c467e2089 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -698,7 +698,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -771,7 +771,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq)
+i40e_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
return 0;
}
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 204d4eadbb..65c18921f4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1177,8 +1177,8 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
{
struct ice_rx_queue **rxq =
(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
- struct ice_tx_queue **txq =
- (struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
+ struct ci_tx_queue **txq =
+ (struct ci_tx_queue **)hw->eth_dev->data->tx_queues;
struct virtchnl_vsi_queue_config_info *vc_config;
struct virtchnl_queue_pair_info *vc_qp;
struct dcf_virtchnl_cmd args;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4ffd1f5567..a0c065d78c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -387,7 +387,7 @@ reset_rx_queue(struct ice_rx_queue *rxq)
}
static inline void
-reset_tx_queue(struct ice_tx_queue *txq)
+reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint32_t i, size;
@@ -454,7 +454,7 @@ ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct iavf_hw *hw = &ad->real_hw.avf;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err = 0;
if (tx_queue_id >= dev->data->nb_tx_queues)
@@ -486,7 +486,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct ice_dcf_hw *hw = &ad->real_hw;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
if (tx_queue_id >= dev->data->nb_tx_queues)
@@ -511,7 +511,7 @@ static int
ice_dcf_start_queues(struct rte_eth_dev *dev)
{
struct ice_rx_queue *rxq;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int nb_rxq = 0;
int nb_txq, i;
@@ -638,7 +638,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct ice_dcf_hw *hw = &ad->real_hw;
struct ice_rx_queue *rxq;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret, i;
/* Stop All queues */
diff --git a/drivers/net/ice/ice_diagnose.c b/drivers/net/ice/ice_diagnose.c
index 5bec9d00ad..a50068441a 100644
--- a/drivers/net/ice/ice_diagnose.c
+++ b/drivers/net/ice/ice_diagnose.c
@@ -605,7 +605,7 @@ void print_node(const struct rte_eth_dev_data *ethdata,
get_elem_type(data->data.elem_type));
if (data->data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
for (uint16_t i = 0; i < ethdata->nb_tx_queues; i++) {
- struct ice_tx_queue *q = ethdata->tx_queues[i];
+ struct ci_tx_queue *q = ethdata->tx_queues[i];
if (q->q_teid == data->node_teid) {
fprintf(stream, "\t\t\t\t<tr><td>TXQ</td><td>%u</td></tr>\n", i);
break;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 93a6308a86..80eee03204 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -6448,7 +6448,7 @@ ice_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct ice_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index a5b27fabd2..ba54655499 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -258,7 +258,7 @@ struct ice_vsi_list {
};
struct ice_rx_queue;
-struct ice_tx_queue;
+struct ci_tx_queue;
/**
* Structure that defines a VSI, associated with a adapter.
@@ -408,7 +408,7 @@ struct ice_fdir_counter_pool_container {
*/
struct ice_fdir_info {
struct ice_vsi *fdir_vsi; /* pointer to fdir VSI structure */
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_rx_queue *rxq;
void *prg_pkt; /* memory for fdir program packet */
uint64_t dma_addr; /* physic address of packet memory*/
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5ec92f6d0c..bcc7c7a016 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -743,7 +743,7 @@ ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct ice_vsi *vsi;
struct ice_hw *hw;
@@ -944,7 +944,7 @@ int
ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct ice_vsi *vsi;
struct ice_hw *hw;
@@ -1008,7 +1008,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* Free all mbufs for descriptors in tx queue */
static void
-_ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+_ice_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -1026,7 +1026,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
}
static void
-ice_reset_tx_queue(struct ice_tx_queue *txq)
+ice_reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint16_t i, prev, size;
@@ -1066,7 +1066,7 @@ ice_reset_tx_queue(struct ice_tx_queue *txq)
int
ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
@@ -1134,7 +1134,7 @@ ice_fdir_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
@@ -1354,7 +1354,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -1467,7 +1467,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket(NULL,
- sizeof(struct ice_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -1542,7 +1542,7 @@ ice_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
void
ice_tx_queue_release(void *txq)
{
- struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
@@ -1577,7 +1577,7 @@ void
ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -2354,7 +2354,7 @@ ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct ice_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
@@ -2412,7 +2412,7 @@ ice_free_queues(struct rte_eth_dev *dev)
int
ice_fdir_setup_tx_resources(struct ice_pf *pf)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz = NULL;
uint32_t ring_size;
struct rte_eth_dev *dev;
@@ -2426,7 +2426,7 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("ice fdir tx queue",
- sizeof(struct ice_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
SOCKET_ID_ANY);
if (!txq) {
@@ -2835,7 +2835,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
}
static inline int
-ice_xmit_cleanup(struct ice_tx_queue *txq)
+ice_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct ice_tx_desc *txd = txq->ice_tx_ring;
@@ -2958,7 +2958,7 @@ ice_calc_pkt_desc(struct rte_mbuf *tx_pkt)
uint16_t
ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
volatile struct ice_tx_desc *ice_tx_ring;
volatile struct ice_tx_desc *txd;
struct ci_tx_entry *sw_ring;
@@ -3182,7 +3182,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
static __rte_always_inline int
-ice_tx_free_bufs(struct ice_tx_queue *txq)
+ice_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint16_t i;
@@ -3218,7 +3218,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
}
static int
-ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
+ice_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -3278,7 +3278,7 @@ ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
#ifdef RTE_ARCH_X86
static int
-ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
+ice_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -3286,7 +3286,7 @@ ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
#endif
static int
-ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
+ice_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -3312,7 +3312,7 @@ ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
int
ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3357,7 +3357,7 @@ tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
}
static inline void
-ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+ice_tx_fill_hw_ring(struct ci_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail];
@@ -3389,7 +3389,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
}
static inline uint16_t
-tx_xmit_pkts(struct ice_tx_queue *txq,
+tx_xmit_pkts(struct ci_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
@@ -3452,14 +3452,14 @@ ice_xmit_pkts_simple(void *tx_queue,
uint16_t nb_tx = 0;
if (likely(nb_pkts <= ICE_TX_MAX_BURST))
- return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ return tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
tx_pkts, nb_pkts);
while (nb_pkts) {
uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
ICE_TX_MAX_BURST);
- ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ ret = tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
&tx_pkts[nb_tx], num);
nb_tx = (uint16_t)(nb_tx + ret);
nb_pkts = (uint16_t)(nb_pkts - ret);
@@ -3667,7 +3667,7 @@ ice_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
}
void __rte_cold
-ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
+ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3716,7 +3716,7 @@ ice_check_empty_mbuf(struct rte_mbuf *tx_pkt)
static uint16_t
ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct ice_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
uint16_t idx;
struct rte_mbuf *mb;
bool pkt_error = false;
@@ -3778,7 +3778,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
pkt_error = true;
break;
}
- if (mb->nb_segs > ((struct ice_tx_queue *)tx_queue)->nb_tx_desc) {
+ if (mb->nb_segs > ((struct ci_tx_queue *)tx_queue)->nb_tx_desc) {
PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
pkt_error = true;
break;
@@ -3839,7 +3839,7 @@ ice_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
(m->tso_segsz < ICE_MIN_TSO_MSS ||
m->tso_segsz > ICE_MAX_TSO_MSS ||
m->nb_segs >
- ((struct ice_tx_queue *)tx_queue)->nb_tx_desc ||
+ ((struct ci_tx_queue *)tx_queue)->nb_tx_desc ||
m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
/**
* MSS outside the range are considered malicious
@@ -3881,7 +3881,7 @@ ice_set_tx_function(struct rte_eth_dev *dev)
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int mbuf_check = ad->devargs.mbuf_check;
#ifdef RTE_ARCH_X86
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
int tx_check_ret = -1;
@@ -4693,7 +4693,7 @@ ice_check_fdir_programming_status(struct ice_rx_queue *rxq)
int
ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc)
{
- struct ice_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
struct ice_rx_queue *rxq = pf->fdir.rxq;
volatile struct ice_fltr_desc *fdirdp;
volatile struct ice_tx_desc *txdp;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 3257f449f5..1cae8a9b50 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -79,7 +79,6 @@ extern int ice_timestamp_dynfield_offset;
#define ICE_TX_MTU_SEG_MAX 8
typedef void (*ice_rx_release_mbufs_t)(struct ice_rx_queue *rxq);
-typedef void (*ice_tx_release_mbufs_t)(struct ice_tx_queue *txq);
typedef void (*ice_rxd_to_pkt_fields_t)(struct ice_rx_queue *rxq,
struct rte_mbuf *mb,
volatile union ice_rx_flex_desc *rxdp);
@@ -145,42 +144,6 @@ struct ice_rx_queue {
bool ts_enable; /* if rxq timestamp is enabled */
};
-struct ice_tx_queue {
- uint16_t nb_tx_desc; /* number of TX descriptors */
- rte_iova_t tx_ring_dma; /* TX ring DMA address */
- volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */
- struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
- uint16_t tx_tail; /* current value of tail register */
- volatile uint8_t *qtx_tail; /* register address of tail */
- uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
- /* index to last TX descriptor to have been cleaned */
- uint16_t last_desc_cleaned;
- /* Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- /* Start freeing TX buffers if there are less free descriptors than
- * this value.
- */
- uint16_t tx_free_thresh;
- /* Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint16_t port_id; /* Device port identifier. */
- uint16_t queue_id; /* TX queue index. */
- uint32_t q_teid; /* TX schedule node id. */
- uint16_t reg_idx;
- uint64_t offloads;
- struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */
- uint16_t tx_next_dd;
- uint16_t tx_next_rs;
- uint64_t mbuf_errors;
- bool tx_deferred_start; /* don't start this queue in dev start */
- bool q_set; /* indicate if tx queue has been configured */
- ice_tx_release_mbufs_t tx_rel_mbufs;
- const struct rte_memzone *mz;
-};
-
/* Offload features */
union ice_tx_offload {
uint64_t data;
@@ -268,7 +231,7 @@ void ice_set_rx_function(struct rte_eth_dev *dev);
uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
void ice_set_tx_function_flag(struct rte_eth_dev *dev,
- struct ice_tx_queue *txq);
+ struct ci_tx_queue *txq);
void ice_set_tx_function(struct rte_eth_dev *dev);
uint32_t ice_rx_queue_count(void *rx_queue);
void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
@@ -290,7 +253,7 @@ void ice_select_rxd_to_pkt_fields_handler(struct ice_rx_queue *rxq,
int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
int ice_rxq_vec_setup(struct ice_rx_queue *rxq);
-int ice_txq_vec_setup(struct ice_tx_queue *txq);
+int ice_txq_vec_setup(struct ci_tx_queue *txq);
uint16_t ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index dde07ac99e..12ffa0fa9a 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -856,7 +856,7 @@ static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -924,7 +924,7 @@ ice_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index e4d0270176..eabd8b04a0 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -860,7 +860,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
}
static __rte_always_inline int
-ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
+ice_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -1053,7 +1053,7 @@ static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool do_offload)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -1122,7 +1122,7 @@ ice_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1144,7 +1144,7 @@ ice_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 7b865b53ad..b39289ceb5 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -13,7 +13,7 @@
#endif
static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
+ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
@@ -105,7 +105,7 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
}
static inline void
-_ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
+_ice_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -231,7 +231,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
}
static inline int
-ice_tx_vec_queue_default(struct ice_tx_queue *txq)
+ice_tx_vec_queue_default(struct ci_tx_queue *txq)
{
if (!txq)
return -1;
@@ -273,7 +273,7 @@ static inline int
ice_tx_vec_dev_check_default(struct rte_eth_dev *dev)
{
int i;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret = 0;
int result = 0;
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 364207e8a8..f11528385a 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -697,7 +697,7 @@ static uint16_t
ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -766,7 +766,7 @@ ice_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -793,7 +793,7 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq)
}
int __rte_cold
-ice_txq_vec_setup(struct ice_tx_queue __rte_unused *txq)
+ice_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
if (!txq)
return -1;
--
2.43.0
* [PATCH v3 07/22] net/iavf: use common Tx queue structure
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (5 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 06/22] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 08/22] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
` (14 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Vladimir Medvedkin, Ian Stokes, Konstantin Ananyev
Merge in the few additional fields used by the iavf driver and convert
that driver to use the common Tx queue structure as well.
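As a reference for readers, here is a minimal compilable sketch of the
pattern this patch extends: a single queue structure in which anonymous
unions select the per-driver ring pointer and the driver-specific
trailing fields. All names in the sketch are illustrative placeholders,
not the actual DPDK definitions.

#include <stdbool.h>
#include <stdint.h>

struct example_tx_queue {
	union { /* Tx ring virtual address: one member per driver */
		volatile void *i40e_tx_ring;
		volatile void *iavf_tx_ring;
		volatile void *ice_tx_ring;
	};
	uint16_t nb_tx_desc; /* fields common to all drivers */
	uint16_t tx_tail;
	union { /* driver-specific values; only one view is active */
		struct { /* i40e */
			uint8_t dcb_tc;
		};
		struct { /* iavf */
			uint16_t ipsec_crypto_pkt_md_offset;
			uint8_t vlan_flag;
			bool use_ctx; /* pkt with ctx info needs two descriptors */
		};
	};
};

Because the unions overlay the per-driver members, adding the iavf view
costs no extra space so long as it is no larger than the biggest
existing member of the union.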
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 15 +++++++-
drivers/net/iavf/iavf.h | 2 +-
drivers/net/iavf/iavf_ethdev.c | 4 +-
drivers/net/iavf/iavf_rxtx.c | 42 ++++++++++-----------
drivers/net/iavf/iavf_rxtx.h | 49 +++----------------------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 8 ++--
drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 ++--
drivers/net/iavf/iavf_vchnl.c | 6 +--
10 files changed, 62 insertions(+), 90 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c965f5ee6c..c4a1a0c816 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -31,8 +31,9 @@ typedef void (*ice_tx_release_mbufs_t)(struct ci_tx_queue *txq);
struct ci_tx_queue {
union { /* TX ring virtual address */
- volatile struct ice_tx_desc *ice_tx_ring;
volatile struct i40e_tx_desc *i40e_tx_ring;
+ volatile struct iavf_tx_desc *iavf_tx_ring;
+ volatile struct ice_tx_desc *ice_tx_ring;
};
volatile uint8_t *qtx_tail; /* register address of tail */
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
@@ -63,8 +64,9 @@ struct ci_tx_queue {
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
union { /* the VSI this queue belongs to */
- struct ice_vsi *ice_vsi;
struct i40e_vsi *i40e_vsi;
+ struct iavf_vsi *iavf_vsi;
+ struct ice_vsi *ice_vsi;
};
const struct rte_memzone *mz;
@@ -76,6 +78,15 @@ struct ci_tx_queue {
struct { /* I40E driver specific values */
uint8_t dcb_tc;
};
+ struct { /* iavf driver specific values */
+ uint16_t ipsec_crypto_pkt_md_offset;
+ uint8_t rel_mbufs_type;
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
+ uint8_t vlan_flag;
+ uint8_t tc;
+ bool use_ctx; /* with ctx info, each pkt needs two descriptors */
+ };
};
};
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index ad526c644c..956c60ef45 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -98,7 +98,7 @@
struct iavf_adapter;
struct iavf_rx_queue;
-struct iavf_tx_queue;
+struct ci_tx_queue;
struct iavf_ipsec_crypto_stats {
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 7f80cd6258..328c224c93 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -954,7 +954,7 @@ static int
iavf_start_queues(struct rte_eth_dev *dev)
{
struct iavf_rx_queue *rxq;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
uint16_t nb_txq, nb_rxq;
@@ -1885,7 +1885,7 @@ iavf_dev_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct iavf_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6eda91e76b..7e381b2a17 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -213,7 +213,7 @@ check_rx_vec_allow(struct iavf_rx_queue *rxq)
}
static inline bool
-check_tx_vec_allow(struct iavf_tx_queue *txq)
+check_tx_vec_allow(struct ci_tx_queue *txq)
{
if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) &&
txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
@@ -282,7 +282,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq)
}
static inline void
-reset_tx_queue(struct iavf_tx_queue *txq)
+reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint32_t i, size;
@@ -388,7 +388,7 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq)
}
static inline void
-release_txq_mbufs(struct iavf_tx_queue *txq)
+release_txq_mbufs(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -778,7 +778,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
struct iavf_info *vf =
IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_vsi *vsi = &vf->vsi;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *mz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -814,7 +814,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("iavf txq",
- sizeof(struct iavf_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -979,7 +979,7 @@ iavf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err = 0;
PMD_DRV_FUNC_TRACE();
@@ -1048,7 +1048,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
PMD_DRV_FUNC_TRACE();
@@ -1092,7 +1092,7 @@ iavf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
void
iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
{
- struct iavf_tx_queue *q = dev->data->tx_queues[qid];
+ struct ci_tx_queue *q = dev->data->tx_queues[qid];
if (!q)
return;
@@ -1107,7 +1107,7 @@ static void
iavf_reset_queues(struct rte_eth_dev *dev)
{
struct iavf_rx_queue *rxq;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
for (i = 0; i < dev->data->nb_tx_queues; i++) {
@@ -2377,7 +2377,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue,
}
static inline int
-iavf_xmit_cleanup(struct iavf_tx_queue *txq)
+iavf_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
@@ -2781,7 +2781,7 @@ iavf_fill_data_desc(volatile struct iavf_tx_desc *desc,
static struct iavf_ipsec_crypto_pkt_metadata *
-iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
+iavf_ipsec_crypto_get_pkt_metadata(const struct ci_tx_queue *txq,
struct rte_mbuf *m)
{
if (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)
@@ -2795,7 +2795,7 @@ iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile struct iavf_tx_desc *txr = txq->iavf_tx_ring;
struct ci_tx_entry *txe_ring = txq->sw_ring;
struct ci_tx_entry *txe, *txn;
@@ -3027,7 +3027,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* correct queue.
*/
static int
-iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
+iavf_check_vlan_up2tc(struct ci_tx_queue *txq, struct rte_mbuf *m)
{
struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id];
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -3646,7 +3646,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
int i, ret;
uint64_t ol_flags;
struct rte_mbuf *m;
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id];
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_adapter *adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3800,7 +3800,7 @@ static uint16_t
iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
enum iavf_tx_burst_type tx_burst_type;
if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll)
@@ -3823,7 +3823,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t good_pkts = nb_pkts;
const char *reason = NULL;
bool pkt_error = false;
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct iavf_adapter *adapter = txq->iavf_vsi->adapter;
enum iavf_tx_burst_type tx_burst_type =
txq->iavf_vsi->adapter->tx_burst_type;
@@ -4144,7 +4144,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
int mbuf_check = adapter->devargs.mbuf_check;
int no_poll_on_link_down = adapter->devargs.no_poll_on_link_down;
#ifdef RTE_ARCH_X86
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
int check_ret;
bool use_sse = false;
@@ -4265,7 +4265,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
}
static int
-iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
+iavf_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -4324,7 +4324,7 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
int
iavf_dev_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct iavf_tx_queue *q = (struct iavf_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
return iavf_tx_done_cleanup_full(q, free_cnt);
}
@@ -4350,7 +4350,7 @@ void
iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -4422,7 +4422,7 @@ iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
int
iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index cc1eaaf54c..c18e01560c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -211,7 +211,7 @@ struct iavf_rxq_ops {
};
struct iavf_txq_ops {
- void (*release_mbufs)(struct iavf_tx_queue *txq);
+ void (*release_mbufs)(struct ci_tx_queue *txq);
};
@@ -273,43 +273,6 @@ struct iavf_rx_queue {
uint64_t hw_time_update;
};
-/* Structure associated with each TX queue. */
-struct iavf_tx_queue {
- const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */
- rte_iova_t tx_ring_dma; /* Tx ring DMA address */
- struct ci_tx_entry *sw_ring; /* address array of SW ring */
- uint16_t nb_tx_desc; /* ring length */
- uint16_t tx_tail; /* current value of tail */
- volatile uint8_t *qtx_tail; /* register address of tail */
- /* number of used desc since RS bit set */
- uint16_t nb_tx_used;
- uint16_t nb_tx_free;
- uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t tx_free_thresh;
- uint16_t tx_rs_thresh;
- uint8_t rel_mbufs_type;
- struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */
-
- uint16_t port_id;
- uint16_t queue_id;
- uint64_t offloads;
- uint16_t tx_next_dd; /* next to set RS, for VPMD */
- uint16_t tx_next_rs; /* next to check DD, for VPMD */
- uint16_t ipsec_crypto_pkt_md_offset;
-
- uint64_t mbuf_errors;
-
- bool q_set; /* if rx queue has been configured */
- bool tx_deferred_start; /* don't start this queue in dev start */
- const struct iavf_txq_ops *ops;
-#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
-#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
- uint8_t vlan_flag;
- uint8_t tc;
- uint8_t use_ctx:1; /* if use the ctx desc, a packet needs two descriptors */
-};
-
/* Offload features */
union iavf_tx_offload {
uint64_t data;
@@ -724,7 +687,7 @@ int iavf_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
int iavf_rx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
-int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+int iavf_txq_vec_setup(struct ci_tx_queue *txq);
uint16_t iavf_recv_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t iavf_recv_pkts_vec_avx512_offload(void *rx_queue,
@@ -757,14 +720,14 @@ uint16_t iavf_xmit_pkts_vec_avx512_ctx_offload(void *tx_queue, struct rte_mbuf *
uint16_t nb_pkts);
uint16_t iavf_xmit_pkts_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq);
+int iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq);
uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
void iavf_set_default_ptype_table(struct rte_eth_dev *dev);
-void iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq);
+void iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq);
void iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq);
-void iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq);
+void iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq);
static inline
void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq,
@@ -791,7 +754,7 @@ void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq,
* to print the qwords
*/
static inline
-void iavf_dump_tx_descriptor(const struct iavf_tx_queue *txq,
+void iavf_dump_tx_descriptor(const struct ci_tx_queue *txq,
const volatile void *desc, uint16_t tx_id)
{
const char *name;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index f33ceceee1..fdb98b417a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1734,7 +1734,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -1801,7 +1801,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 97420a75fd..9cf7171524 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1845,7 +1845,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
}
static __rte_always_inline int
-iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
+iavf_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -2311,7 +2311,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -2379,7 +2379,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, nb_mbuf, tx_id;
@@ -2447,7 +2447,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -2473,7 +2473,7 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
}
void __rte_cold
-iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
+iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq)
{
unsigned int i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
@@ -2494,7 +2494,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
}
int __rte_cold
-iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
{
txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC;
return 0;
@@ -2512,7 +2512,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 6305c8cdd6..f1bb12c4f4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -17,7 +17,7 @@
#endif
static __rte_always_inline int
-iavf_tx_free_bufs(struct iavf_tx_queue *txq)
+iavf_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
@@ -104,7 +104,7 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
}
static inline void
-_iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq)
+_iavf_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
unsigned i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
@@ -164,7 +164,7 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
}
static inline int
-iavf_tx_vec_queue_default(struct iavf_tx_queue *txq)
+iavf_tx_vec_queue_default(struct ci_tx_queue *txq)
{
if (!txq)
return -1;
@@ -227,7 +227,7 @@ static inline int
iavf_tx_vec_dev_check_default(struct rte_eth_dev *dev)
{
int i;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret;
int result = 0;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 64c3bf0eaa..5c0b2fff46 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1366,7 +1366,7 @@ uint16_t
iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -1435,7 +1435,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1459,13 +1459,13 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
}
void __rte_cold
-iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq)
+iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq)
{
_iavf_tx_queue_release_mbufs_vec(txq);
}
int __rte_cold
-iavf_txq_vec_setup(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
return 0;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 0646a2f978..c74466735d 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1218,10 +1218,8 @@ int
iavf_configure_queues(struct iavf_adapter *adapter,
uint16_t num_queue_pairs, uint16_t index)
{
- struct iavf_rx_queue **rxq =
- (struct iavf_rx_queue **)adapter->dev_data->rx_queues;
- struct iavf_tx_queue **txq =
- (struct iavf_tx_queue **)adapter->dev_data->tx_queues;
+ struct iavf_rx_queue **rxq = (struct iavf_rx_queue **)adapter->dev_data->rx_queues;
+ struct ci_tx_queue **txq = (struct ci_tx_queue **)adapter->dev_data->tx_queues;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
struct virtchnl_vsi_queue_config_info *vc_config;
struct virtchnl_queue_pair_info *vc_qp;
--
2.43.0
* [PATCH v3 08/22] net/ixgbe: convert Tx queue context cache field to ptr
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (6 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 07/22] net/iavf: use common Tx queue structure Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 09/22] net/ixgbe: use common Tx queue structure Bruce Richardson
` (13 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin
Rather than having a two-element array of context cache values inside
the Tx queue structure, convert it to a pointer to a cache placed at
the end of the structure. This makes future merging of the structure
easier, as the "ixgbe_advctx_info" struct no longer needs to be defined
when defining a combined queue structure.
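For illustration, a minimal sketch of the allocation pattern adopted
below, assuming only DPDK's rte_zmalloc_socket() and RTE_PTR_ADD(); the
struct contents and the EX_CTX_NUM constant are placeholders:

#include <rte_common.h>
#include <rte_malloc.h>

#define EX_CTX_NUM 2 /* placeholder for IXGBE_CTX_NUM */

struct ex_advctx_info { uint64_t flags; }; /* placeholder contents */

struct ex_tx_queue {
	uint32_t ctx_curr;
	struct ex_advctx_info *ctx_cache; /* points just past the struct */
};

static struct ex_tx_queue *
ex_txq_alloc(int socket_id)
{
	/* one zeroed allocation holds the queue struct plus its cache */
	struct ex_tx_queue *txq = rte_zmalloc_socket("ex txq",
			sizeof(*txq) +
			sizeof(struct ex_advctx_info) * EX_CTX_NUM,
			RTE_CACHE_LINE_SIZE, socket_id);
	if (txq == NULL)
		return NULL;
	/* the cache lives at the end of the same allocation */
	txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(*txq));
	return txq;
}

A side benefit of this layout is lifetime management: a single
rte_free() on the queue releases the context cache along with it.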
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 7 ++++---
drivers/net/ixgbe/ixgbe_rxtx.h | 4 ++--
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 3 +--
3 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index f7ddbba1b6..2ca26cd132 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2522,8 +2522,7 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
txq->ctx_curr = 0;
- memset((void *)&txq->ctx_cache, 0,
- IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+ memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
}
static const struct ixgbe_txq_ops def_txq_ops = {
@@ -2741,10 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
/* First allocate the tx queue data structure */
- txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue),
+ txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
+ sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq == NULL)
return -ENOMEM;
+ txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
/*
* Allocate TX ring hardware descriptors. A memzone large enough to
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index f6bae37cf3..847cacf7b5 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -215,8 +215,8 @@ struct ixgbe_tx_queue {
uint8_t wthresh; /**< Write-back threshold reg. */
uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
- /** Hardware context0 history. */
- struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
+ /** Hardware context history. */
+ struct ixgbe_advctx_info *ctx_cache;
const struct ixgbe_txq_ops *ops; /**< txq ops */
bool tx_deferred_start; /**< not in global dev start. */
#ifdef RTE_LIB_SECURITY
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index cc51bf6eed..ec334b5f65 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -176,8 +176,7 @@ _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
txq->ctx_curr = 0;
- memset((void *)&txq->ctx_cache, 0,
- IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+ memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
}
static inline int
--
2.43.0
* [PATCH v3 09/22] net/ixgbe: use common Tx queue structure
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (7 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 08/22] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 10/22] net/_common_intel: pack " Bruce Richardson
` (12 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Merge in the additional fields used by the ixgbe driver and then
convert that driver over to using the common Tx queue structure.
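A simplified sketch of the SW-ring union added here (entry layouts
abbreviated from the common ci_tx_entry and ci_tx_entry_vec types; the
names below are stand-ins): the scalar and vector paths share one
pointer slot, and whichever path set up the queue determines the actual
layout of the allocated ring.

#include <stdint.h>

struct rte_mbuf; /* opaque in this sketch */

struct ex_tx_entry { /* scalar view: tracks descriptor linkage */
	struct rte_mbuf *mbuf;
	uint16_t next_id;
	uint16_t last_id;
};

struct ex_tx_entry_vec { /* vector view: mbuf pointer only */
	struct rte_mbuf *mbuf;
};

struct ex_txq {
	union {
		struct ex_tx_entry *sw_ring;
		struct ex_tx_entry_vec *sw_ring_vec;
	};
};

Since both entry types start with the mbuf pointer, code that touches
only entry->mbuf works on either layout, which helps explain how later
patches in this series can share mbuf cleanup helpers across paths.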
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 14 +++-
drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 64 +++++++++----------
drivers/net/ixgbe/ixgbe_rxtx.h | 56 ++--------------
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 26 ++++----
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 14 ++--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 14 ++--
8 files changed, 80 insertions(+), 114 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c4a1a0c816..51ae3b051d 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -34,9 +34,13 @@ struct ci_tx_queue {
volatile struct i40e_tx_desc *i40e_tx_ring;
volatile struct iavf_tx_desc *iavf_tx_ring;
volatile struct ice_tx_desc *ice_tx_ring;
+ volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
};
volatile uint8_t *qtx_tail; /* register address of tail */
- struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ union {
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ struct ci_tx_entry_vec *sw_ring_vec;
+ };
rte_iova_t tx_ring_dma; /* TX ring DMA address */
uint16_t nb_tx_desc; /* number of TX descriptors */
uint16_t tx_tail; /* current value of tail register */
@@ -87,6 +91,14 @@ struct ci_tx_queue {
uint8_t tc;
bool use_ctx; /* with ctx info, each pkt needs two descriptors */
};
+ struct { /* ixgbe specific values */
+ const struct ixgbe_txq_ops *ops;
+ struct ixgbe_advctx_info *ctx_cache;
+ uint32_t ctx_curr;
+#ifdef RTE_LIB_SECURITY
+ uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
+#endif
+ };
};
};
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 8bee97d191..5f18fbaad5 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1118,7 +1118,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
* RX and TX function.
*/
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
/* TX queue function in primary, set by last queue initialized
* Tx queue may not initialized by primary process
*/
@@ -1623,7 +1623,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
* RX function
*/
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
/* TX queue function in primary, set by last queue initialized
* Tx queue may not initialized by primary process
*/
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index a878db3150..3fd05ed5eb 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -51,7 +51,7 @@ uint16_t
ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
- struct ixgbe_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 2ca26cd132..344ef85685 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -98,7 +98,7 @@
* Return the total number of buffers freed.
*/
static __rte_always_inline int
-ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t status;
@@ -195,7 +195,7 @@ tx1(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts)
* Copy mbuf pointers to the S/W ring.
*/
static inline void
-ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
+ixgbe_tx_fill_hw_ring(struct ci_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile union ixgbe_adv_tx_desc *txdp = &txq->ixgbe_tx_ring[txq->tx_tail];
@@ -231,7 +231,7 @@ static inline uint16_t
tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring;
uint16_t n = 0;
@@ -344,7 +344,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -362,7 +362,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static inline void
-ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
+ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
__rte_unused uint64_t *mdata)
@@ -493,7 +493,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
* or create a new context descriptor.
*/
static inline uint32_t
-what_advctx_update(struct ixgbe_tx_queue *txq, uint64_t flags,
+what_advctx_update(struct ci_tx_queue *txq, uint64_t flags,
union ixgbe_tx_offload tx_offload)
{
/* If match with the current used context */
@@ -561,7 +561,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
/* Reset transmit descriptors after they have been used */
static inline int
-ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
+ixgbe_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring;
@@ -623,7 +623,7 @@ uint16_t
ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
volatile union ixgbe_adv_tx_desc *txr;
@@ -963,7 +963,7 @@ ixgbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
int i, ret;
uint64_t ol_flags;
struct rte_mbuf *m;
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
for (i = 0; i < nb_pkts; i++) {
m = tx_pkts[i];
@@ -2335,7 +2335,7 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
**********************************************************************/
static void __rte_cold
-ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
unsigned i;
@@ -2350,7 +2350,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
}
static int
-ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
+ixgbe_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
@@ -2408,7 +2408,7 @@ ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
}
static int
-ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
+ixgbe_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -2432,7 +2432,7 @@ ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
}
static int
-ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
+ixgbe_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -2441,7 +2441,7 @@ ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
int
ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
@@ -2450,7 +2450,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
(rte_eal_process_type() != RTE_PROC_PRIMARY ||
- txq->sw_ring_v != NULL)) {
+ txq->sw_ring_vec != NULL)) {
return ixgbe_tx_done_cleanup_vec(txq, free_cnt);
} else {
return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
@@ -2461,7 +2461,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
if (txq != NULL &&
txq->sw_ring != NULL)
@@ -2469,7 +2469,7 @@ ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
}
static void __rte_cold
-ixgbe_tx_queue_release(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
txq->ops->release_mbufs(txq);
@@ -2487,7 +2487,7 @@ ixgbe_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
/* (Re)set dynamic ixgbe_tx_queue fields to defaults */
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = {{0}};
struct ci_tx_entry *txe = txq->sw_ring;
@@ -2536,7 +2536,7 @@ static const struct ixgbe_txq_ops def_txq_ops = {
* in dev_init by secondary process when attaching to an existing ethdev.
*/
void __rte_cold
-ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
+ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
@@ -2618,7 +2618,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_txconf *tx_conf)
{
const struct rte_memzone *tz;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_hw *hw;
uint16_t tx_rs_thresh, tx_free_thresh;
uint64_t offloads;
@@ -2740,12 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
/* First allocate the tx queue data structure */
- txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
+ txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ci_tx_queue) +
sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq == NULL)
return -ENOMEM;
- txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
+ txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ci_tx_queue));
/*
* Allocate TX ring hardware descriptors. A memzone large enough to
@@ -3312,7 +3312,7 @@ ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct ixgbe_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint32_t *status;
uint32_t desc;
@@ -3377,7 +3377,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
for (i = 0; i < dev->data->nb_tx_queues; i++) {
- struct ixgbe_tx_queue *txq = dev->data->tx_queues[i];
+ struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
txq->ops->release_mbufs(txq);
@@ -5284,7 +5284,7 @@ void __rte_cold
ixgbe_dev_tx_init(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint64_t bus_addr;
uint32_t hlreg0;
uint32_t txctrl;
@@ -5402,7 +5402,7 @@ int __rte_cold
ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_rx_queue *rxq;
uint32_t txdctl;
uint32_t dmatxctl;
@@ -5572,7 +5572,7 @@ int __rte_cold
ixgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint32_t txdctl;
int poll_ms;
@@ -5611,7 +5611,7 @@ int __rte_cold
ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint32_t txdctl;
uint32_t txtdh, txtdt;
int poll_ms;
@@ -5685,7 +5685,7 @@ void
ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -5877,7 +5877,7 @@ void __rte_cold
ixgbevf_dev_tx_init(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint64_t bus_addr;
uint32_t txctrl;
uint16_t i;
@@ -5918,7 +5918,7 @@ void __rte_cold
ixgbevf_dev_rxtx_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_rx_queue *rxq;
uint32_t txdctl;
uint32_t rxdctl;
@@ -6127,7 +6127,7 @@ ixgbe_xmit_fixed_burst_vec(void __rte_unused *tx_queue,
}
int
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue __rte_unused *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
return -1;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 847cacf7b5..4333e5bf2f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -180,56 +180,10 @@ struct ixgbe_advctx_info {
union ixgbe_tx_offload tx_offload_mask;
};
-/**
- * Structure associated with each TX queue.
- */
-struct ixgbe_tx_queue {
- /** TX ring virtual address. */
- volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
- rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
- union {
- struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
- struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
- };
- volatile uint8_t *qtx_tail; /**< Address of TDT register. */
- uint16_t nb_tx_desc; /**< number of TX descriptors. */
- uint16_t tx_tail; /**< current value of TDT reg. */
- /**< Start freeing TX buffers if there are less free descriptors than
- this value. */
- uint16_t tx_free_thresh;
- /** Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- /** Number of TX descriptors used since RS bit was set. */
- uint16_t nb_tx_used;
- /** Index to last TX descriptor to have been cleaned. */
- uint16_t last_desc_cleaned;
- /** Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- uint16_t tx_next_dd; /**< next desc to scan for DD bit */
- uint16_t tx_next_rs; /**< next desc to set RS bit */
- uint16_t queue_id; /**< TX queue index. */
- uint16_t reg_idx; /**< TX queue register index. */
- uint16_t port_id; /**< Device port identifier. */
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
- uint32_t ctx_curr; /**< Hardware context states. */
- /** Hardware context history. */
- struct ixgbe_advctx_info *ctx_cache;
- const struct ixgbe_txq_ops *ops; /**< txq ops */
- bool tx_deferred_start; /**< not in global dev start. */
-#ifdef RTE_LIB_SECURITY
- uint8_t using_ipsec;
- /**< indicates that IPsec TX feature is in use */
-#endif
- const struct rte_memzone *mz;
-};
-
struct ixgbe_txq_ops {
- void (*release_mbufs)(struct ixgbe_tx_queue *txq);
- void (*free_swring)(struct ixgbe_tx_queue *txq);
- void (*reset)(struct ixgbe_tx_queue *txq);
+ void (*release_mbufs)(struct ci_tx_queue *txq);
+ void (*free_swring)(struct ci_tx_queue *txq);
+ void (*reset)(struct ci_tx_queue *txq);
};
/*
@@ -250,7 +204,7 @@ struct ixgbe_txq_ops {
* the queue parameters. Used in tx_queue_setup by primary process and then
* in dev_init by secondary process when attaching to an existing ethdev.
*/
-void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq);
+void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq);
/**
* Sets the rx_pkt_burst callback in the ixgbe rte_eth_dev instance.
@@ -287,7 +241,7 @@ void ixgbe_recycle_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb_mbufs);
uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq);
+int ixgbe_txq_vec_setup(struct ci_tx_queue *txq);
uint64_t ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev);
uint64_t ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index ec334b5f65..06e760867c 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -12,7 +12,7 @@
#include "ixgbe_rxtx.h"
static __rte_always_inline int
-ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t status;
@@ -32,7 +32,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
- txep = &txq->sw_ring_v[txq->tx_next_dd - (n - 1)];
+ txep = &txq->sw_ring_vec[txq->tx_next_dd - (n - 1)];
m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
if (likely(m != NULL)) {
free[0] = m;
@@ -79,7 +79,7 @@ tx_backlog_entry(struct ci_tx_entry_vec *txep,
}
static inline void
-_ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
unsigned int i;
struct ci_tx_entry_vec *txe;
@@ -92,14 +92,14 @@ _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
i != txq->tx_tail;
i = (i + 1) % txq->nb_tx_desc) {
- txe = &txq->sw_ring_v[i];
+ txe = &txq->sw_ring_vec[i];
rte_pktmbuf_free_seg(txe->mbuf);
}
txq->nb_tx_free = max_desc;
/* reset tx_entry */
for (i = 0; i < txq->nb_tx_desc; i++) {
- txe = &txq->sw_ring_v[i];
+ txe = &txq->sw_ring_vec[i];
txe->mbuf = NULL;
}
}
@@ -134,22 +134,22 @@ _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static inline void
-_ixgbe_tx_free_swring_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_tx_free_swring_vec(struct ci_tx_queue *txq)
{
if (txq == NULL)
return;
if (txq->sw_ring != NULL) {
- rte_free(txq->sw_ring_v - 1);
- txq->sw_ring_v = NULL;
+ rte_free(txq->sw_ring_vec - 1);
+ txq->sw_ring_vec = NULL;
}
}
static inline void
-_ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_reset_tx_queue_vec(struct ci_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } };
- struct ci_tx_entry_vec *txe = txq->sw_ring_v;
+ struct ci_tx_entry_vec *txe = txq->sw_ring_vec;
uint16_t i;
/* Zero out HW ring memory */
@@ -198,14 +198,14 @@ ixgbe_rxq_vec_setup_default(struct ixgbe_rx_queue *rxq)
}
static inline int
-ixgbe_txq_vec_setup_default(struct ixgbe_tx_queue *txq,
+ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
const struct ixgbe_txq_ops *txq_ops)
{
- if (txq->sw_ring_v == NULL)
+ if (txq->sw_ring_vec == NULL)
return -1;
/* leave the first one for overflow */
- txq->sw_ring_v = txq->sw_ring_v + 1;
+ txq->sw_ring_vec = txq->sw_ring_vec + 1;
txq->ops = txq_ops;
return 0;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 06be7ec82a..cb749a3760 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -571,7 +571,7 @@ uint16_t
ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -591,7 +591,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -611,7 +611,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
tx_backlog_entry(txep, tx_pkts, nb_commit);
@@ -634,7 +634,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
_ixgbe_tx_queue_release_mbufs_vec(txq);
}
@@ -646,13 +646,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
_ixgbe_tx_free_swring_vec(txq);
}
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
_ixgbe_reset_tx_queue_vec(txq);
}
@@ -670,7 +670,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
}
int __rte_cold
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue *txq)
{
return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops);
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a21a57bd55..e46550f76a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -693,7 +693,7 @@ uint16_t
ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -713,7 +713,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -734,7 +734,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
tx_backlog_entry(txep, tx_pkts, nb_commit);
@@ -757,7 +757,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
_ixgbe_tx_queue_release_mbufs_vec(txq);
}
@@ -769,13 +769,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
_ixgbe_tx_free_swring_vec(txq);
}
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
_ixgbe_reset_tx_queue_vec(txq);
}
@@ -793,7 +793,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
}
int __rte_cold
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue *txq)
{
return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops);
}
--
2.43.0
* [PATCH v3 10/22] net/_common_intel: pack Tx queue structure
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (8 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 09/22] net/ixgbe: use common Tx queue structure Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 11/22] net/_common_intel: add post-Tx buffer free function Bruce Richardson
` (11 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes, Anatoly Burakov
Move some fields about to better pack the Tx queue structure and to
make sure all data used by the vector codepaths is on the first
cacheline of the structure. Checking with "pahole" on a 64-bit build,
only one 6-byte hole is left in the structure after this patch, and it
is on the second cacheline.
As part of the reordering, move the p/h/wthresh values to the
ixgbe-specific part of the union, since that is the only driver which
actually uses those values. The i40e and ice drivers just record the
values in order to return them later, so we can drop them from the Tx
queue structure for those drivers and simply report the defaults in
all cases.
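The first-cacheline property described above can be pinned down with a
build-time check along the following lines; the stand-in struct mirrors
only the start of the reordered layout, and the choice of field to
assert on is an assumption rather than something taken from the patch.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define EX_CACHE_LINE_SIZE 64 /* RTE_CACHE_LINE_SIZE on these targets */

struct ex_tx_queue { /* abbreviated stand-in for the common queue */
	volatile void *tx_ring; /* ring-address union in the real struct */
	volatile uint8_t *qtx_tail;
	void *sw_ring; /* SW-ring union in the real struct */
	uint16_t nb_tx_desc;
	uint16_t tx_tail;
	uint16_t nb_tx_used;
	uint16_t last_desc_cleaned;
	uint16_t nb_tx_free;
	uint16_t tx_free_thresh;
	uint16_t tx_rs_thresh;
	uint16_t port_id;
	uint16_t queue_id;
	uint16_t reg_idx;
	uint16_t tx_next_dd;
	uint16_t tx_next_rs;
};

/* on 64-bit: 3 pointers + 12 u16 fields = 48 bytes, all on line zero */
static_assert(offsetof(struct ex_tx_queue, tx_next_rs) <
		EX_CACHE_LINE_SIZE,
	"vector-path fields must stay on the first cacheline");

The resulting layout can also be inspected on a built object with
something like "pahole -C ci_tx_queue <object file>", matching the
pahole check mentioned above.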
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 12 +++++-------
drivers/net/i40e/i40e_rxtx.c | 9 +++------
drivers/net/ice/ice_rxtx.c | 9 +++------
3 files changed, 11 insertions(+), 19 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 51ae3b051d..c372d2838b 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -41,7 +41,6 @@ struct ci_tx_queue {
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
struct ci_tx_entry_vec *sw_ring_vec;
};
- rte_iova_t tx_ring_dma; /* TX ring DMA address */
uint16_t nb_tx_desc; /* number of TX descriptors */
uint16_t tx_tail; /* current value of tail register */
uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
@@ -55,16 +54,14 @@ struct ci_tx_queue {
uint16_t tx_free_thresh;
/* Number of TX descriptors to use before RS bit is set. */
uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
uint16_t port_id; /* Device port identifier. */
uint16_t queue_id; /* TX queue index. */
uint16_t reg_idx;
- uint64_t offloads;
uint16_t tx_next_dd;
uint16_t tx_next_rs;
+ uint64_t offloads;
uint64_t mbuf_errors;
+ rte_iova_t tx_ring_dma; /* TX ring DMA address */
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
union { /* the VSI this queue belongs to */
@@ -95,9 +92,10 @@ struct ci_tx_queue {
const struct ixgbe_txq_ops *ops;
struct ixgbe_advctx_info *ctx_cache;
uint32_t ctx_curr;
-#ifdef RTE_LIB_SECURITY
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
-#endif
};
};
};
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 305bc53480..539b170266 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2539,9 +2539,6 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->nb_tx_desc = nb_desc;
txq->tx_rs_thresh = tx_rs_thresh;
txq->tx_free_thresh = tx_free_thresh;
- txq->pthresh = tx_conf->tx_thresh.pthresh;
- txq->hthresh = tx_conf->tx_thresh.hthresh;
- txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
txq->reg_idx = reg_idx;
txq->port_id = dev->data->port_id;
@@ -3310,9 +3307,9 @@ i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_thresh.pthresh = txq->pthresh;
- qinfo->conf.tx_thresh.hthresh = txq->hthresh;
- qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_thresh.pthresh = I40E_DEFAULT_TX_PTHRESH;
+ qinfo->conf.tx_thresh.hthresh = I40E_DEFAULT_TX_HTHRESH;
+ qinfo->conf.tx_thresh.wthresh = I40E_DEFAULT_TX_WTHRESH;
qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index bcc7c7a016..e2e147ba3e 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1492,9 +1492,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
txq->nb_tx_desc = nb_desc;
txq->tx_rs_thresh = tx_rs_thresh;
txq->tx_free_thresh = tx_free_thresh;
- txq->pthresh = tx_conf->tx_thresh.pthresh;
- txq->hthresh = tx_conf->tx_thresh.hthresh;
- txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
txq->reg_idx = vsi->base_queue + queue_idx;
@@ -1583,9 +1580,9 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_thresh.pthresh = txq->pthresh;
- qinfo->conf.tx_thresh.hthresh = txq->hthresh;
- qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_thresh.pthresh = ICE_DEFAULT_TX_PTHRESH;
+ qinfo->conf.tx_thresh.hthresh = ICE_DEFAULT_TX_HTHRESH;
+ qinfo->conf.tx_thresh.wthresh = ICE_DEFAULT_TX_WTHRESH;
qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v3 11/22] net/_common_intel: add post-Tx buffer free function
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (9 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 10/22] net/_common_intel: pack " Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 12/22] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
` (10 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
The actions taken for post-Tx buffer free in the SSE and AVX code paths
of the i40e, iavf and ice drivers are all common, so centralize those
in the _common_intel header.
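For illustration, a driver is expected to supply only a cheap
descriptor-done check and delegate the rest to the common routine. A
sketch with a hypothetical driver prefix "xyz" (xyz_desc_dd_set is an
illustrative stand-in for the driver's real DD-bit test):
/* sketch: hypothetical driver "xyz" wiring into the common free fn */
static inline int
xyz_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
{
	/* return non-zero once HW has written back descriptor idx */
	return xyz_desc_dd_set(txq, idx);
}
static uint16_t
xyz_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	struct ci_tx_queue *txq = tx_queue;
	/* reclaim descriptors before queueing new packets, as the
	 * per-driver free functions did previously
	 */
	if (txq->nb_tx_free < txq->tx_free_thresh)
		ci_tx_free_bufs(txq, xyz_tx_desc_done);
	/* ... descriptor writing elided ... */
	return nb_pkts;
}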
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 71 ++++++++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_common.h | 72 ++++---------------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 61 ++++-----------------
drivers/net/ice/ice_rxtx_vec_common.h | 61 ++++-----------------
4 files changed, 98 insertions(+), 167 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c372d2838b..a930309c05 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -7,6 +7,7 @@
#include <stdint.h>
#include <rte_mbuf.h>
+#include <rte_ethdev.h>
/* forward declaration of the common intel (ci) queue structure */
struct ci_tx_queue;
@@ -107,4 +108,74 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
txep[i].mbuf = tx_pkts[i];
}
+#define IETH_VPMD_TX_MAX_FREE_BUF 64
+
+typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
+
+static __rte_always_inline int
+ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+{
+ struct ci_tx_entry *txep;
+ uint32_t n;
+ uint32_t i;
+ int nb_free = 0;
+ struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
+
+ /* check DD bits on threshold descriptor */
+ if (!desc_done(txq, txq->tx_next_dd))
+ return 0;
+
+ n = txq->tx_rs_thresh;
+
+ /* first buffer to free from S/W ring is at index
+ * tx_next_dd - (tx_rs_thresh-1)
+ */
+ txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
+
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
+ for (i = 0; i < n; i++) {
+ free[i] = txep[i].mbuf;
+ /* no need to reset txep[i].mbuf in vector path */
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
+ goto done;
+ }
+
+ m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+ if (likely(m != NULL)) {
+ free[0] = m;
+ nb_free = 1;
+ for (i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (likely(m != NULL)) {
+ if (likely(m->pool == free[0]->pool)) {
+ free[nb_free++] = m;
+ } else {
+ rte_mempool_put_bulk(free[0]->pool,
+ (void *)free,
+ nb_free);
+ free[0] = m;
+ nb_free = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+ } else {
+ for (i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (m != NULL)
+ rte_mempool_put(m->pool, m);
+ }
+ }
+
+done:
+ /* buffers were freed, update counters */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ return txq->tx_rs_thresh;
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 57d6263ccf..907d32dd0b 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -16,72 +16,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+i40e_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->i40e_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
+ rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
- for (i = 0; i < n; i++) {
- free[i] = txep[i].mbuf;
- /* no need to reset txep[i].mbuf in vector path */
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m != NULL)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, i40e_tx_desc_done);
}
static inline void
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index f1bb12c4f4..7130229f23 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -16,61 +16,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+iavf_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->iavf_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) ==
+ rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
iavf_tx_free_bufs(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, iavf_tx_desc_done);
}
static inline void
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index b39289ceb5..c6c3933299 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -12,61 +12,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->ice_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) ==
+ rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
- rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, ice_tx_desc_done);
}
static inline void
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v3 12/22] net/_common_intel: add Tx buffer free fn for AVX-512
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (10 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 11/22] net/_common_intel: add post-Tx buffer free function Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 13/22] net/iavf: use common Tx " Bruce Richardson
` (9 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes, Anatoly Burakov
The AVX-512 code paths for the ice and i40e drivers are common, and
differ from the regular post-Tx free function in that the SW ring from
which the buffers are freed does not contain anything other than the
mbuf pointer. Merge these into a common function in _common_intel to
reduce duplication.
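The fast path below relies on the vector SW ring entry being nothing
but the mbuf pointer, which is what makes the bulk copy into the
mempool cache legal. A sketch of that invariant, checkable at compile
time (C11; assuming ci_tx_entry_vec as defined earlier in this series):
#include <assert.h>
/* sketch: an array of vector SW ring entries must alias an array of
 * object pointers, so entries can be copied straight into the mempool
 * cache without unpacking
 */
static_assert(sizeof(struct ci_tx_entry_vec) == sizeof(struct rte_mbuf *),
		"vector SW ring entry must be a bare mbuf pointer");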
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 92 +++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 114 +----------------------
drivers/net/ice/ice_rxtx_vec_avx512.c | 117 +-----------------------
3 files changed, 94 insertions(+), 229 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index a930309c05..84ff839672 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -178,4 +178,96 @@ ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
return txq->tx_rs_thresh;
}
+static __rte_always_inline int
+ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+{
+ int nb_free = 0;
+ struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF];
+ struct rte_mbuf *m;
+
+ /* check DD bits on threshold descriptor */
+ if (!desc_done(txq, txq->tx_next_dd))
+ return 0;
+
+ const uint32_t n = txq->tx_rs_thresh;
+
+ /* first buffer to free from S/W ring is at index
+ * tx_next_dd - (tx_rs_thresh - 1)
+ */
+ struct ci_tx_entry_vec *txep = txq->sw_ring_vec;
+ txep += txq->tx_next_dd - (n - 1);
+
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ struct rte_mempool *mp = txep[0].mbuf->pool;
+ void **cache_objs;
+ struct rte_mempool_cache *cache = rte_mempool_default_cache(mp, rte_lcore_id());
+
+ if (!cache || cache->len == 0)
+ goto normal;
+
+ cache_objs = &cache->objs[cache->len];
+
+ if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
+ rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
+ goto done;
+ }
+
+ /* The cache follows the following algorithm
+ * 1. Add the objects to the cache
+ * 2. Anything greater than the cache min value (if it
+ * crosses the cache flush threshold) is flushed to the ring.
+ */
+ /* Add elements back into the cache */
+ uint32_t copied = 0;
+ /* n is multiple of 32 */
+ while (copied < n) {
+ memcpy(&cache_objs[copied], &txep[copied], 32 * sizeof(void *));
+ copied += 32;
+ }
+ cache->len += n;
+
+ if (cache->len >= cache->flushthresh) {
+ rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
+ cache->len - cache->size);
+ cache->len = cache->size;
+ }
+ goto done;
+ }
+
+normal:
+ m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+ if (likely(m)) {
+ free[0] = m;
+ nb_free = 1;
+ for (uint32_t i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (likely(m)) {
+ if (likely(m->pool == free[0]->pool)) {
+ free[nb_free++] = m;
+ } else {
+ rte_mempool_put_bulk(free[0]->pool, (void *)free, nb_free);
+ free[0] = m;
+ nb_free = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+ } else {
+ for (uint32_t i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (m)
+ rte_mempool_put(m->pool, m);
+ }
+ }
+
+done:
+ /* buffers were freed, update counters */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ return txq->tx_rs_thresh;
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index a3f6d1667f..9bb2a44231 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -754,118 +754,6 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
rx_pkts + retval, nb_pkts);
}
-static __rte_always_inline int
-i40e_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = (void *)txq->sw_ring;
- txep += txq->tx_next_dd - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- void **cache_objs;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
-
- if (!cache || n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_generic_put(mp, (void *)txep, n, cache);
- goto done;
- }
-
- cache_objs = &cache->objs[cache->len];
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it
- * crosses the cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_load_si512(&txep[copied]);
- const __m512i b = _mm512_load_si512(&txep[copied + 8]);
- const __m512i c = _mm512_load_si512(&txep[copied + 16]);
- const __m512i d = _mm512_load_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_load_si512(&txep[copied]);
- const __m512i b = _mm512_load_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk
- (mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- rte_mbuf_prefetch_part2(txep[i + 3].mbuf);
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static inline void
vtx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
{
@@ -941,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index eabd8b04a0..538be707ef 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -859,121 +859,6 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
rx_pkts + retval, nb_pkts);
}
-static __rte_always_inline int
-ice_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
- rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh - 1)
- */
- txep = (void *)txq->sw_ring;
- txep += txq->tx_next_dd - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- void **cache_objs;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
-
- if (!cache || cache->len == 0)
- goto normal;
-
- cache_objs = &cache->objs[cache->len];
-
- if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
- goto done;
- }
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it
- * crosses the cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
- const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
- const __m512i d = _mm512_loadu_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk
- (mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
-normal:
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline void
ice_vtx1(volatile struct ice_tx_desc *txdp,
struct rte_mbuf *pkt, uint64_t flags, bool do_offload)
@@ -1064,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v3 13/22] net/iavf: use common Tx free fn for AVX-512
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (11 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 12/22] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 14/22] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
` (8 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes,
Vladimir Medvedkin, Anatoly Burakov
Switch the iavf driver to use the common Tx free function. This
requires one additional parameter to that function, since iavf
sometimes uses context descriptors, which means that we have double the
number of descriptors per SW ring slot.
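The effect of the new parameter is just an index/count scaling: when
context descriptors are in use, each packet occupies two HW descriptors
but only one SW ring entry. A sketch of the arithmetic (values
illustrative):
/* sketch: tx_rs_thresh = 32, ctx_descs = true (1)
 * - entries to free: n = 32 >> 1 = 16
 * - descriptor index 99 maps to SW ring entry 99 >> 1 = 49
 */
const uint32_t n = txq->tx_rs_thresh >> ctx_descs;
struct ci_tx_entry_vec *txep =
	txq->sw_ring_vec + (txq->tx_next_dd >> ctx_descs) - (n - 1);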
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 6 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 119 +-----------------------
drivers/net/ice/ice_rxtx_vec_avx512.c | 2 +-
4 files changed, 7 insertions(+), 122 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 84ff839672..26aef528fa 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -179,7 +179,7 @@ ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
}
static __rte_always_inline int
-ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
{
int nb_free = 0;
struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF];
@@ -189,13 +189,13 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
if (!desc_done(txq, txq->tx_next_dd))
return 0;
- const uint32_t n = txq->tx_rs_thresh;
+ const uint32_t n = txq->tx_rs_thresh >> ctx_descs;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh - 1)
*/
struct ci_tx_entry_vec *txep = txq->sw_ring_vec;
- txep += txq->tx_next_dd - (n - 1);
+ txep += (txq->tx_next_dd >> ctx_descs) - (n - 1);
if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 9bb2a44231..c555c3491d 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -829,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- ci_tx_free_bufs_vec(txq, i40e_tx_desc_done);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 9cf7171524..8543490c70 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1844,121 +1844,6 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
true);
}
-static __rte_always_inline int
-iavf_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh >> txq->use_ctx;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = (void *)txq->sw_ring;
- txep += (txq->tx_next_dd >> txq->use_ctx) - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
- void **cache_objs;
-
- if (!cache || cache->len == 0)
- goto normal;
-
- cache_objs = &cache->objs[cache->len];
-
- if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
- goto done;
- }
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it crosses the
- * cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
- const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
- const __m512i d = _mm512_loadu_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp,
- &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
-normal:
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline void
tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -2320,7 +2205,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -2388,7 +2273,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, true);
nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1);
nb_commit &= 0xFFFE;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 538be707ef..f6ec593f96 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -949,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ci_tx_free_bufs_vec(txq, ice_tx_desc_done);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v3 14/22] net/ice: move Tx queue mbuf cleanup fn to common
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (12 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 13/22] net/iavf: use common Tx " Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 15/22] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
` (7 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Konstantin Ananyev
The functions to loop over the Tx queue and clean up all the mbufs on
it, e.g. for queue shutdown, are not device specific and so can move
into the _common_intel headers. The only complication is ensuring that
the correct ring format, either minimal vector or full structure, is
used.
The ice driver currently uses two functions and a function pointer to
help with this - though one of those functions actually performs a
further check inside it - so we can simplify this down to just one
common function, with a flag set in the appropriate place. This avoids
checking for the AVX-512-specific functions, which were the only ones
using the smaller struct in this driver.
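In outline, the driver now records the ring format once at queue start
and the common function does the rest on teardown. A sketch of the
expected usage (field names as added in this patch; "ad" being the
driver's adapter struct):
/* sketch: at Tx queue start, record how the queue will be driven */
txq->vector_tx = ad->tx_vec_allowed;     /* any vector Tx path in use */
txq->vector_sw_ring = ad->tx_use_avx512; /* minimal vec SW ring format */
/* sketch: at queue stop/release, one call replaces the per-driver
 * function-pointer indirection
 */
ci_txq_release_all_mbufs(txq);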
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 49 ++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.c | 5 +--
drivers/net/ice/ice_ethdev.h | 3 +-
drivers/net/ice/ice_rxtx.c | 33 +++++------------
drivers/net/ice/ice_rxtx_vec_common.h | 51 ---------------------------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 ---
6 files changed, 60 insertions(+), 85 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 26aef528fa..1bf2a61b2f 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -65,6 +65,8 @@ struct ci_tx_queue {
rte_iova_t tx_ring_dma; /* TX ring DMA address */
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
+ bool vector_tx; /* port is using vector TX */
+ bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
union { /* the VSI this queue belongs to */
struct i40e_vsi *i40e_vsi;
struct iavf_vsi *iavf_vsi;
@@ -74,7 +76,6 @@ struct ci_tx_queue {
union {
struct { /* ICE driver specific values */
- ice_tx_release_mbufs_t tx_rel_mbufs;
uint32_t q_teid; /* TX schedule node id. */
};
struct { /* I40E driver specific values */
@@ -270,4 +271,50 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
return txq->tx_rs_thresh;
}
+#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \
+ uint16_t i = start; \
+ if (txq->tx_tail < i) { \
+ for (; i < txq->nb_tx_desc; i++) { \
+ rte_pktmbuf_free_seg(swr[i].mbuf); \
+ swr[i].mbuf = NULL; \
+ } \
+ i = 0; \
+ } \
+ for (; i < txq->tx_tail; i++) { \
+ rte_pktmbuf_free_seg(swr[i].mbuf); \
+ swr[i].mbuf = NULL; \
+ } \
+} while (0)
+
+static inline void
+ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
+{
+ if (unlikely(!txq || !txq->sw_ring))
+ return;
+
+ if (!txq->vector_tx) {
+ for (uint16_t i = 0; i < txq->nb_tx_desc; i++) {
+ if (txq->sw_ring[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+ txq->sw_ring[i].mbuf = NULL;
+ }
+ }
+ return;
+ }
+
+ /**
+ * vPMD tx will not set sw_ring's mbuf to NULL after free,
+ * so need to free remains more carefully.
+ */
+ const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;
+
+ if (txq->vector_sw_ring) {
+ struct ci_tx_entry_vec *swr = txq->sw_ring_vec;
+ IETH_FREE_BUFS_LOOP(txq, swr, start);
+ } else {
+ struct ci_tx_entry *swr = txq->sw_ring;
+ IETH_FREE_BUFS_LOOP(txq, swr, start);
+ }
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index a0c065d78c..c20399cd84 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -24,6 +24,7 @@
#include "ice_generic_flow.h"
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#include "_common_intel/tx.h"
#define DCF_NUM_MACADDR_MAX 64
@@ -500,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -650,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ba54655499..afe8dae497 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -621,13 +621,12 @@ struct ice_adapter {
/* Set bit if the engine is disabled */
unsigned long disabled_engine_mask;
struct ice_parser *psr;
-#ifdef RTE_ARCH_X86
+ /* used only on X86, zero on other Archs */
bool rx_use_avx2;
bool rx_use_avx512;
bool tx_use_avx2;
bool tx_use_avx512;
bool rx_vec_offload_support;
-#endif
};
struct ice_vsi_vlan_pvid_info {
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index e2e147ba3e..0a890e587c 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -751,6 +751,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
struct ice_aqc_add_tx_qgrp *txq_elem;
struct ice_tlan_ctx tx_ctx;
int buf_len;
+ struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -822,6 +823,10 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EIO;
}
+ /* record what kind of descriptor cleanup we need on teardown */
+ txq->vector_tx = ad->tx_vec_allowed;
+ txq->vector_sw_ring = ad->tx_use_avx512;
+
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
rte_free(txq_elem);
@@ -1006,25 +1011,6 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return 0;
}
-/* Free all mbufs for descriptors in tx queue */
-static void
-_ice_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
- return;
- }
-
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static void
ice_reset_tx_queue(struct ci_tx_queue *txq)
{
@@ -1103,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
ice_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1166,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->qtx_tail = NULL;
return 0;
@@ -1518,7 +1504,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
ice_reset_tx_queue(txq);
txq->q_set = true;
dev->data->tx_queues[queue_idx] = txq;
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs;
ice_set_tx_function_flag(dev, txq);
return 0;
@@ -1546,8 +1531,7 @@ ice_tx_queue_release(void *txq)
return;
}
- if (q->tx_rel_mbufs != NULL)
- q->tx_rel_mbufs(q);
+ ci_txq_release_all_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -2460,7 +2444,6 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
txq->q_set = true;
pf->fdir.txq = txq;
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs;
return ICE_SUCCESS;
}
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index c6c3933299..907828b675 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -61,57 +61,6 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline void
-_ice_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (unlikely(!txq || !txq->sw_ring)) {
- PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
- return;
- }
-
- /**
- * vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
- */
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
-
-#ifdef __AVX512VL__
- struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id];
-
- if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
- dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- } else
-#endif
- {
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static inline int
ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index f11528385a..bff39c28d8 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -795,10 +795,6 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq)
int __rte_cold
ice_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
- if (!txq)
- return -1;
-
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs_vec;
return 0;
}
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v3 15/22] net/i40e: use common Tx queue mbuf cleanup fn
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (13 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 14/22] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 16/22] net/ixgbe: " Bruce Richardson
` (6 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes
Update the driver to be similar to the "ice" driver and use the common
mbuf ring cleanup code on shutdown of a Tx queue.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_ethdev.h | 4 +-
drivers/net/i40e/i40e_rxtx.c | 70 ++++------------------------------
drivers/net/i40e/i40e_rxtx.h | 1 -
3 files changed, 9 insertions(+), 66 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index d351193ed9..ccc8732d7d 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -1260,12 +1260,12 @@ struct i40e_adapter {
/* For RSS reta table update */
uint8_t rss_reta_updated;
-#ifdef RTE_ARCH_X86
+
+ /* used only on x86, zero on other architectures */
bool rx_use_avx2;
bool rx_use_avx512;
bool tx_use_avx2;
bool tx_use_avx512;
-#endif
};
/**
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 539b170266..b70919c5dc 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1875,6 +1875,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int err;
struct ci_tx_queue *txq;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct i40e_adapter *ad = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1889,6 +1890,9 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
PMD_DRV_LOG(WARNING, "TX queue %u is deferred start",
tx_queue_id);
+ txq->vector_tx = ad->tx_vec_allowed;
+ txq->vector_sw_ring = ad->tx_use_avx512;
+
/*
* tx_queue_id is queue id application refers to, while
* rxq->reg_idx is the real queue index.
@@ -1929,7 +1933,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return err;
}
- i40e_tx_queue_release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
i40e_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -2604,7 +2608,7 @@ i40e_tx_queue_release(void *txq)
return;
}
- i40e_tx_queue_release_mbufs(q);
+ ci_txq_release_all_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -2701,66 +2705,6 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
rxq->rxrearm_nb = 0;
}
-void
-i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- struct rte_eth_dev *dev;
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
- return;
- }
-
- dev = &rte_eth_devices[txq->port_id];
-
- /**
- * vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
- */
-#ifdef CC_AVX512_SUPPORT
- if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- return;
- }
-#endif
- if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx2 ||
- dev->tx_pkt_burst == i40e_xmit_pkts_vec) {
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- } else {
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
- }
-}
-
static int
i40e_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
@@ -3127,7 +3071,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
if (!dev->data->tx_queues[i])
continue;
- i40e_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+ ci_txq_release_all_mbufs(dev->data->tx_queues[i]);
i40e_reset_tx_queue(dev->data->tx_queues[i]);
}
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 043d1df912..858b8433e9 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -179,7 +179,6 @@ void i40e_dev_clear_queues(struct rte_eth_dev *dev);
void i40e_dev_free_queues(struct rte_eth_dev *dev);
void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
void i40e_reset_tx_queue(struct ci_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq);
int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v3 16/22] net/ixgbe: use common Tx queue mbuf cleanup fn
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (14 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 15/22] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 17/22] net/iavf: " Bruce Richardson
` (5 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Update the driver to use the common cleanup function.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 22 +++---------------
drivers/net/ixgbe/ixgbe_rxtx.h | 1 -
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 28 ++---------------------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 7 ------
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 7 ------
5 files changed, 5 insertions(+), 60 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 344ef85685..bf9d461b06 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2334,21 +2334,6 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
*
**********************************************************************/
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- unsigned i;
-
- if (txq->sw_ring != NULL) {
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf != NULL) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
- }
-}
-
static int
ixgbe_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt)
{
@@ -2472,7 +2457,7 @@ static void __rte_cold
ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->free_swring(txq);
rte_memzone_free(txq->mz);
rte_free(txq);
@@ -2526,7 +2511,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops def_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
@@ -3380,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->reset(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -5655,7 +5639,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
if (txq->ops != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->reset(txq);
}
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 4333e5bf2f..11689eb432 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -181,7 +181,6 @@ struct ixgbe_advctx_info {
};
struct ixgbe_txq_ops {
- void (*release_mbufs)(struct ci_tx_queue *txq);
void (*free_swring)(struct ci_tx_queue *txq);
void (*reset)(struct ci_tx_queue *txq);
};
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 06e760867c..2b12bdcc9c 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -78,32 +78,6 @@ tx_backlog_entry(struct ci_tx_entry_vec *txep,
txep[i].mbuf = tx_pkts[i];
}
-static inline void
-_ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- unsigned int i;
- struct ci_tx_entry_vec *txe;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-
- if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc)
- return;
-
- /* release the used mbufs in sw_ring */
- for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
- i != txq->tx_tail;
- i = (i + 1) % txq->nb_tx_desc) {
- txe = &txq->sw_ring_vec[i];
- rte_pktmbuf_free_seg(txe->mbuf);
- }
- txq->nb_tx_free = max_desc;
-
- /* reset tx_entry */
- for (i = 0; i < txq->nb_tx_desc; i++) {
- txe = &txq->sw_ring_vec[i];
- txe->mbuf = NULL;
- }
-}
-
static inline void
_ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -207,6 +181,8 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
/* leave the first one for overflow */
txq->sw_ring_vec = txq->sw_ring_vec + 1;
txq->ops = txq_ops;
+ txq->vector_tx = 1;
+ txq->vector_sw_ring = 1;
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index cb749a3760..2ccb399b64 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -633,12 +633,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_pkts;
}
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- _ixgbe_tx_queue_release_mbufs_vec(txq);
-}
-
void __rte_cold
ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -658,7 +652,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops vec_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index e46550f76a..fa26365f06 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -756,12 +756,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_pkts;
}
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- _ixgbe_tx_queue_release_mbufs_vec(txq);
-}
-
void __rte_cold
ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -781,7 +775,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops vec_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v3 17/22] net/iavf: use common Tx queue mbuf cleanup fn
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (15 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 16/22] net/ixgbe: " Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 18/22] net/ice: use vector SW ring for all vector paths Bruce Richardson
` (4 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin,
Konstantin Ananyev, Anatoly Burakov
Adjust the iavf driver to also use the common mbuf freeing functions on
Tx queue release/cleanup. The implementation is complicated a little by
the need to integrate the additional "use_ctx" parameter for the iavf
code, but the changes in other drivers are minimal - just a constant
"false" parameter.
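The "use_ctx" handling is again just a scaling of positions from
descriptor space into SW ring space; a short sketch of the mapping
(values illustrative):
/* sketch: nb_tx_desc = 512, use_ctx = 1
 * - effective SW ring size: 512 >> 1 = 256 entries
 * - a tx_tail of 100 maps to SW ring position 100 >> 1 = 50
 */
const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
const uint16_t end = txq->tx_tail >> use_ctx;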
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 27 +++++++++---------
drivers/net/i40e/i40e_rxtx.c | 6 ++--
drivers/net/iavf/iavf_rxtx.c | 37 ++-----------------------
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 24 ++--------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 18 ------------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 9 ++----
drivers/net/ice/ice_dcf_ethdev.c | 4 +--
drivers/net/ice/ice_rxtx.c | 6 ++--
drivers/net/ixgbe/ixgbe_rxtx.c | 6 ++--
9 files changed, 31 insertions(+), 106 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 1bf2a61b2f..310b51adcf 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -271,23 +271,23 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
return txq->tx_rs_thresh;
}
-#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \
+#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
uint16_t i = start; \
- if (txq->tx_tail < i) { \
- for (; i < txq->nb_tx_desc; i++) { \
+ if (end < i) { \
+ for (; i < nb_desc; i++) { \
rte_pktmbuf_free_seg(swr[i].mbuf); \
swr[i].mbuf = NULL; \
} \
i = 0; \
} \
- for (; i < txq->tx_tail; i++) { \
+ for (; i < end; i++) { \
rte_pktmbuf_free_seg(swr[i].mbuf); \
swr[i].mbuf = NULL; \
} \
} while (0)
static inline void
-ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
+ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
{
if (unlikely(!txq || !txq->sw_ring))
return;
@@ -306,15 +306,14 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
* vPMD tx will not set sw_ring's mbuf to NULL after free,
* so need to free remains more carefully.
*/
- const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;
-
- if (txq->vector_sw_ring) {
- struct ci_tx_entry_vec *swr = txq->sw_ring_vec;
- IETH_FREE_BUFS_LOOP(txq, swr, start);
- } else {
- struct ci_tx_entry *swr = txq->sw_ring;
- IETH_FREE_BUFS_LOOP(txq, swr, start);
- }
+ const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
+ const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
+ const uint16_t end = txq->tx_tail >> use_ctx;
+
+ if (txq->vector_sw_ring)
+ IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
+ else
+ IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
}
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b70919c5dc..081d743e62 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1933,7 +1933,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return err;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
i40e_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -2608,7 +2608,7 @@ i40e_tx_queue_release(void *txq)
return;
}
- ci_txq_release_all_mbufs(q);
+ ci_txq_release_all_mbufs(q, false);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -3071,7 +3071,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
if (!dev->data->tx_queues[i])
continue;
- ci_txq_release_all_mbufs(dev->data->tx_queues[i]);
+ ci_txq_release_all_mbufs(dev->data->tx_queues[i], false);
i40e_reset_tx_queue(dev->data->tx_queues[i]);
}
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 7e381b2a17..f0ab881ac5 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -387,24 +387,6 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq)
rxq->rx_nb_avail = 0;
}
-static inline void
-release_txq_mbufs(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
- return;
- }
-
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static const
struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
[IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_rxq_mbufs,
@@ -413,18 +395,6 @@ struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
#endif
};
-static const
-struct iavf_txq_ops iavf_txq_release_mbufs_ops[] = {
- [IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_txq_mbufs,
-#ifdef RTE_ARCH_X86
- [IAVF_REL_MBUFS_SSE_VEC].release_mbufs = iavf_tx_queue_release_mbufs_sse,
-#ifdef CC_AVX512_SUPPORT
- [IAVF_REL_MBUFS_AVX512_VEC].release_mbufs = iavf_tx_queue_release_mbufs_avx512,
-#endif
-#endif
-
-};
-
static inline void
iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
struct rte_mbuf *mb,
@@ -889,7 +859,6 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->q_set = true;
dev->data->tx_queues[queue_idx] = txq;
txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(queue_idx);
- txq->rel_mbufs_type = IAVF_REL_MBUFS_DEFAULT;
if (check_tx_vec_allow(txq) == false) {
struct iavf_adapter *ad =
@@ -1068,7 +1037,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, txq->use_ctx);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1097,7 +1066,7 @@ iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!q)
return;
- iavf_txq_release_mbufs_ops[q->rel_mbufs_type].release_mbufs(q);
+ ci_txq_release_all_mbufs(q, q->use_ctx);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -1114,7 +1083,7 @@ iavf_reset_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, txq->use_ctx);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 8543490c70..007759e451 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2357,31 +2357,11 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
}
-void __rte_cold
-iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq)
-{
- unsigned int i;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
- const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */
- const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- if (!txq->sw_ring || txq->nb_tx_free == max_desc)
- return;
-
- i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx;
- while (i != end_desc) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- if (++i == wrap_point)
- i = 0;
- }
-}
-
int __rte_cold
iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
{
- txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC;
+ txq->vector_tx = true;
+ txq->vector_sw_ring = true;
return 0;
}
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 7130229f23..6f94587eee 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -60,24 +60,6 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline void
-_iavf_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- unsigned i;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-
- if (!txq->sw_ring || txq->nb_tx_free == max_desc)
- return;
-
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- while (i != txq->tx_tail) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- if (++i == txq->nb_tx_desc)
- i = 0;
- }
-}
-
static inline int
iavf_rxq_vec_setup_default(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 5c0b2fff46..3adf2a59e4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1458,16 +1458,11 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
_iavf_rx_queue_release_mbufs_vec(rxq);
}
-void __rte_cold
-iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq)
-{
- _iavf_tx_queue_release_mbufs_vec(txq);
-}
-
int __rte_cold
iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
- txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
+ txq->vector_tx = true;
+ txq->vector_sw_ring = false;
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index c20399cd84..57fe44ebb3 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -501,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -651,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0a890e587c..ad0ddf6a88 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1089,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
ice_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1152,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->qtx_tail = NULL;
return 0;
@@ -1531,7 +1531,7 @@ ice_tx_queue_release(void *txq)
return;
}
- ci_txq_release_all_mbufs(q);
+ ci_txq_release_all_mbufs(q, false);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bf9d461b06..3b7a6a6f0e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2457,7 +2457,7 @@ static void __rte_cold
ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->free_swring(txq);
rte_memzone_free(txq->mz);
rte_free(txq);
@@ -3364,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->reset(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -5639,7 +5639,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
if (txq->ops != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->reset(txq);
}
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
--
2.43.0
* [PATCH v3 18/22] net/ice: use vector SW ring for all vector paths
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (16 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 17/22] net/iavf: " Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 19/22] net/i40e: " Bruce Richardson
` (3 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Konstantin Ananyev
The AVX-512 code path used a smaller SW ring structure containing only
the mbuf pointer and no other fields. Those other fields are used only
in the scalar code path, so update all vector driver code paths to use
the smaller, faster structure.
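For context, the two entry layouts come from patch 2 of this series
("provide common Tx entry structures"); roughly, they look like this -
the vector variant drops the bookkeeping fields that only the scalar
path uses, shrinking each entry to a single pointer (this is an
approximate sketch, see patch 2 for the real definitions):

#include <stdint.h>
#include <rte_mbuf.h>

struct ci_tx_entry {
	struct rte_mbuf *mbuf;	/* mbuf associated with the Tx descriptor */
	uint16_t next_id;	/* index of next descriptor in ring */
	uint16_t last_id;	/* index of last scattered descriptor */
};

struct ci_tx_entry_vec {
	struct rte_mbuf *mbuf;	/* the only field the vector paths need */
};

On a 64-bit system this takes each entry from 16 bytes (with padding)
down to 8, doubling the number of SW-ring entries per cache line.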
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 7 +++++++
drivers/net/ice/ice_rxtx.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/ice/ice_rxtx_vec_avx512.c | 14 ++------------
drivers/net/ice/ice_rxtx_vec_common.h | 6 ------
drivers/net/ice/ice_rxtx_vec_sse.c | 12 ++++++------
6 files changed, 22 insertions(+), 31 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 310b51adcf..aa42b9b49f 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -109,6 +109,13 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
txep[i].mbuf = tx_pkts[i];
}
+static __rte_always_inline void
+ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ for (uint16_t i = 0; i < nb_pkts; ++i)
+ txep[i].mbuf = tx_pkts[i];
+}
+
#define IETH_VPMD_TX_MAX_FREE_BUF 64
typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ad0ddf6a88..77cb6688a7 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* record what kind of descriptor cleanup we need on teardown */
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = ad->tx_use_avx512;
+ txq->vector_sw_ring = txq->vector_tx;
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 12ffa0fa9a..98bab322b4 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -867,7 +867,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_vec(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -875,13 +875,13 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -896,10 +896,10 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index f6ec593f96..481f784e34 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -924,16 +924,6 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
}
}
-static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool do_offload)
@@ -964,7 +954,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry_avx512(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, do_offload);
tx_pkts += (n - 1);
@@ -982,7 +972,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
}
- ice_tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, do_offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 907828b675..aa709fb51c 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -20,12 +20,6 @@ ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, ice_tx_desc_done);
-}
-
static inline void
_ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index bff39c28d8..73e3e9eb54 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -709,7 +709,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_vec(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
nb_commit = nb_pkts;
@@ -718,13 +718,13 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
ice_vtx1(txdp, *tx_pkts, flags);
@@ -738,10 +738,10 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v3 19/22] net/i40e: use vector SW ring for all vector paths
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (17 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 18/22] net/ice: use vector SW ring for all vector paths Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 20/22] net/iavf: " Bruce Richardson
` (2 subsequent siblings)
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage
The AVX-512 code path used a smaller SW ring structure containing only
the mbuf pointer and no other fields. Those other fields are used only
in the scalar code path, so update all vector driver code paths (AVX2,
SSE, Neon, Altivec) to use the smaller, faster structure.
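Note also that the buffer-free call in each path now goes through the
common code, parameterized by a driver-specific "descriptor done"
predicate (i40e_tx_desc_done in the diff below). A self-contained toy
model of that callback pattern - the struct, flag values, and function
names here are invented for illustration; only the overall shape
mirrors the series:

#include <stdint.h>
#include <stdio.h>

struct toy_txq {
	uint64_t desc_flags[8];	/* stands in for the HW descriptor ring */
	uint16_t tx_next_dd;	/* next descriptor expected to complete */
};

typedef int (*desc_done_fn)(struct toy_txq *txq, uint16_t idx);

/* driver-specific predicate: has HW written back this descriptor? */
static int
toy_desc_done(struct toy_txq *txq, uint16_t idx)
{
	return (txq->desc_flags[idx] & 0x1) != 0;
}

/* the common code touches the descriptor format only via the
 * predicate, so one implementation can serve all four drivers */
static int
common_free_bufs(struct toy_txq *txq, desc_done_fn desc_done)
{
	if (!desc_done(txq, txq->tx_next_dd))
		return 0;	/* nothing completed yet */
	printf("descriptor %u done, freeing its buffers\n", txq->tx_next_dd);
	return 1;
}

int
main(void)
{
	struct toy_txq txq = { .desc_flags = { 1, 1, 0 }, .tx_next_dd = 1 };

	common_free_bufs(&txq, toy_desc_done);
	return 0;
}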
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 8 +++++---
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 14 ++------------
drivers/net/i40e/i40e_rxtx_vec_common.h | 6 ------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_sse.c | 12 ++++++------
7 files changed, 31 insertions(+), 45 deletions(-)
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 081d743e62..745c467912 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
tx_queue_id);
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = ad->tx_use_avx512;
+ txq->vector_sw_ring = txq->vector_tx;
/*
* tx_queue_id is queue id application refers to, while
@@ -3550,9 +3550,11 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
}
}
+ if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_128)
+ ad->tx_vec_allowed = false;
+
if (ad->tx_simple_allowed) {
- if (ad->tx_vec_allowed &&
- rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+ if (ad->tx_vec_allowed) {
#ifdef RTE_ARCH_X86
if (ad->tx_use_avx512) {
#ifdef CC_AVX512_SUPPORT
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 500bba2cef..b6900a3e15 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -553,14 +553,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
nb_commit = nb_pkts;
@@ -569,13 +569,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -589,10 +589,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 29bef64287..2477573c01 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -745,13 +745,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -759,13 +759,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -780,10 +780,10 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index c555c3491d..2497e6a8f0 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -807,16 +807,6 @@ vtx(volatile struct i40e_tx_desc *txdp,
}
}
-static __rte_always_inline void
-tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline uint16_t
i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
@@ -844,7 +834,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry_avx512(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -862,7 +852,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
}
- tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 907d32dd0b..733dc797cd 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -24,12 +24,6 @@ i40e_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-i40e_tx_free_bufs(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, i40e_tx_desc_done);
-}
-
static inline void
_i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index c97f337e43..b398d66154 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -681,14 +681,14 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -696,13 +696,13 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -716,10 +716,10 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 2c467e2089..90c57e59d0 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -700,14 +700,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -715,13 +715,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -735,10 +735,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v3 20/22] net/iavf: use vector SW ring for all vector paths
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (18 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 19/22] net/i40e: " Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 21/22] net/_common_intel: remove unneeded code Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 22/22] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Vladimir Medvedkin, Ian Stokes, Konstantin Ananyev
The AVX-512 code path used a smaller SW ring structure containing only
the mbuf pointer and no other fields. Those other fields are used only
in the scalar code path, so update all vector driver code paths (AVX2,
SSE) to use the smaller, faster structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/iavf/iavf_rxtx.c | 7 -------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 8 --------
drivers/net/iavf/iavf_rxtx_vec_common.h | 6 ------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 14 +++++++-------
5 files changed, 13 insertions(+), 34 deletions(-)
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index f0ab881ac5..6692f6992b 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -4193,14 +4193,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
-#ifdef CC_AVX512_SUPPORT
- if (use_avx512)
- iavf_txq_vec_setup_avx512(txq);
- else
- iavf_txq_vec_setup(txq);
-#else
iavf_txq_vec_setup(txq);
-#endif
}
if (no_poll_on_link_down) {
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index fdb98b417a..b847886081 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1736,14 +1736,14 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -1752,13 +1752,13 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
iavf_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -1773,10 +1773,10 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 007759e451..641f3311eb 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2357,14 +2357,6 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
}
-int __rte_cold
-iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
-{
- txq->vector_tx = true;
- txq->vector_sw_ring = true;
- return 0;
-}
-
uint16_t
iavf_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 6f94587eee..c69399a173 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -24,12 +24,6 @@ iavf_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-iavf_tx_free_bufs(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, iavf_tx_desc_done);
-}
-
static inline void
_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 3adf2a59e4..9f7db80bfd 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1368,14 +1368,14 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -1384,13 +1384,13 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -1404,10 +1404,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
@@ -1462,7 +1462,7 @@ int __rte_cold
iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
txq->vector_tx = true;
- txq->vector_sw_ring = false;
+ txq->vector_sw_ring = txq->vector_tx;
return 0;
}
--
2.43.0
* [PATCH v3 21/22] net/_common_intel: remove unneeded code
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (19 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 20/22] net/iavf: " Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 22/22] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Konstantin Ananyev,
Vladimir Medvedkin, Anatoly Burakov
With all drivers using the common Tx structure updated so that their
vector paths all use the simplified Tx mbuf ring format, it's no longer
necessary to have separate flags for the ring format and for use of a
vector driver.
Remove the former flag and base all decisions on the vector flag. With
that done, there are only two paths to consider when releasing all
mbufs in the ring, rather than three. That allows further
simplification of the "ci_txq_release_all_mbufs" function.
The separate function to free buffers for a vector driver not using
the simplified ring format can similarly be removed, as it is no longer
necessary.
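The release logic that remains is just a wrap-aware walk from the
oldest live entry up to the tail. A standalone model of that walk (ring
values hypothetical; the real code frees mbufs rather than printing):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* hypothetical ring: 8 entries, live region [start, end)
	 * wraps past the end of the ring */
	const uint16_t nb_desc = 8, start = 6, end = 3;

	uint16_t i = start;
	if (end < i) {	/* region wraps: handle tail of ring first */
		for (; i < nb_desc; i++)
			printf("free entry %u\n", i);
		i = 0;
	}
	for (; i < end; i++)
		printf("free entry %u\n", i);

	return 0;
}

With start=6 and end=3 this frees entries 6, 7, 0, 1, 2 - the same
order as the loop pair in the diff below.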
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 97 +++--------------------
drivers/net/i40e/i40e_rxtx.c | 1 -
drivers/net/iavf/iavf_rxtx_vec_sse.c | 1 -
drivers/net/ice/ice_rxtx.c | 1 -
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 1 -
5 files changed, 10 insertions(+), 91 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index aa42b9b49f..d9cf4474fc 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -66,7 +66,6 @@ struct ci_tx_queue {
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
bool vector_tx; /* port is using vector TX */
- bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
union { /* the VSI this queue belongs to */
struct i40e_vsi *i40e_vsi;
struct iavf_vsi *iavf_vsi;
@@ -120,72 +119,6 @@ ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts,
typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
-static __rte_always_inline int
-ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
-{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if (!desc_done(txq, txq->tx_next_dd))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
- for (i = 0; i < n; i++) {
- free[i] = txep[i].mbuf;
- /* no need to reset txep[i].mbuf in vector path */
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m != NULL)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline int
ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
{
@@ -278,21 +211,6 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
return txq->tx_rs_thresh;
}
-#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
- uint16_t i = start; \
- if (end < i) { \
- for (; i < nb_desc; i++) { \
- rte_pktmbuf_free_seg(swr[i].mbuf); \
- swr[i].mbuf = NULL; \
- } \
- i = 0; \
- } \
- for (; i < end; i++) { \
- rte_pktmbuf_free_seg(swr[i].mbuf); \
- swr[i].mbuf = NULL; \
- } \
-} while (0)
-
static inline void
ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
{
@@ -311,16 +229,21 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
/**
* vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
+ * so determining buffers to free is a little more complex.
*/
const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
const uint16_t end = txq->tx_tail >> use_ctx;
- if (txq->vector_sw_ring)
- IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
- else
- IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
+ uint16_t i = start;
+ if (end < i) {
+ for (; i < nb_desc; i++)
+ rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+ i = 0;
+ }
+ for (; i < end; i++)
+ rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+ memset(txq->sw_ring_vec, 0, sizeof(txq->sw_ring_vec[0]) * nb_desc);
}
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 745c467912..c3ff2e05c3 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,6 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
tx_queue_id);
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = txq->vector_tx;
/*
* tx_queue_id is queue id application refers to, while
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 9f7db80bfd..21d5bfd309 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1462,7 +1462,6 @@ int __rte_cold
iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
txq->vector_tx = true;
- txq->vector_sw_ring = txq->vector_tx;
return 0;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 77cb6688a7..dcfa409813 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,6 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* record what kind of descriptor cleanup we need on teardown */
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = txq->vector_tx;
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 2b12bdcc9c..53d1fed6f8 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -182,7 +182,6 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
txq->sw_ring_vec = txq->sw_ring_vec + 1;
txq->ops = txq_ops;
txq->vector_tx = 1;
- txq->vector_sw_ring = 1;
return 0;
}
--
2.43.0
* [PATCH v3 22/22] net/ixgbe: use common Tx backlog entry fn
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (20 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 21/22] net/_common_intel: remove unneeded code Bruce Richardson
@ 2024-12-11 17:33 ` Bruce Richardson
21 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-11 17:33 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Remove the custom vector Tx backlog entry function and use the standard
"_common_intel" one, now that all the vector drivers are using the same,
smaller ring structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 10 ----------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 4 ++--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 4 ++--
3 files changed, 4 insertions(+), 14 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 53d1fed6f8..9c3752a12a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -68,16 +68,6 @@ ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 2ccb399b64..f879f6fa9a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -597,7 +597,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -614,7 +614,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring_vec[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index fa26365f06..915358e16b 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -720,7 +720,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -737,7 +737,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring_vec[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (24 preceding siblings ...)
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
@ 2024-12-20 14:38 ` Bruce Richardson
2024-12-20 14:38 ` [PATCH v4 01/24] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
` (23 more replies)
25 siblings, 24 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:38 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson
This patchset attempts to reduce the amount of code duplication across
a number of Intel NIC drivers, specifically: ixgbe, i40e, iavf, and
ice. The first patch extracts a function from the Rx side; otherwise
the majority of the changes are on the Tx side, leading to a converged
Tx queue structure across the 4 drivers, and a large number of common
functions.
v3->v4:
* Add patches 23 & 24 to the set, to do a little more deduplication on
the Rx side
v2->v3:
* Fix incorrect/unadjusted memset in patch 8, leading to incorrect
threshold tracking in ixgbe.
v1->v2:
* Fix two additional checkpatch issues that were flagged.
* Added in patch 21, which performs additional cleanup that is possible
once all vector drivers use the same mbuf free/release process.
[This brings the patchset to having over twice as many lines removed
as added (1887 vs 930), and close to having a net removal of 1kloc]
RFC->v1:
* Moved the location of the common code from "common/intel_eth" to
"net/_common_intel", and added only ".." to the driver include path so
that the include paths contain "_common_intel", making it clear that
these are not driver-local headers.
* Due to change in location, structure/fn prefix changes from "ieth" to
"ci" for "common intel".
* Removed the seemingly arbitrary split of vector and non-vector code,
since much of the code taken from the vector files was scalar code
used by the vector drivers.
* Split code into separate Rx and Tx files.
* Fixed multiple checkpatch issues (but not all).
* Attempted to improve name standardization by using "_vec" as a common
suffix for all vector-related fns and data. Previously, some names had
"vec" in the middle, while others had just a "_v" suffix or the full
word "vector" as a suffix.
* Other minor changes...
Bruce Richardson (24):
net/_common_intel: add pkt reassembly fn for intel drivers
net/_common_intel: provide common Tx entry structures
net/_common_intel: add Tx mbuf ring replenish fn
drivers/net: align Tx queue struct field names
drivers/net: add prefix for driver-specific structs
net/_common_intel: merge ice and i40e Tx queue struct
net/iavf: use common Tx queue structure
net/ixgbe: convert Tx queue context cache field to ptr
net/ixgbe: use common Tx queue structure
net/_common_intel: pack Tx queue structure
net/_common_intel: add post-Tx buffer free function
net/_common_intel: add Tx buffer free fn for AVX-512
net/iavf: use common Tx free fn for AVX-512
net/ice: move Tx queue mbuf cleanup fn to common
net/i40e: use common Tx queue mbuf cleanup fn
net/ixgbe: use common Tx queue mbuf cleanup fn
net/iavf: use common Tx queue mbuf cleanup fn
net/ice: use vector SW ring for all vector paths
net/i40e: use vector SW ring for all vector paths
net/iavf: use vector SW ring for all vector paths
net/_common_intel: remove unneeded code
net/ixgbe: use common Tx backlog entry fn
net/_common_intel: create common mbuf initializer fn
net/_common_intel: extract common Rx vector criteria
drivers/net/_common_intel/rx.h | 112 ++++++++
drivers/net/_common_intel/tx.h | 249 ++++++++++++++++++
drivers/net/i40e/i40e_ethdev.c | 4 +-
drivers/net/i40e/i40e_ethdev.h | 8 +-
drivers/net/i40e/i40e_fdir.c | 10 +-
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 6 +-
drivers/net/i40e/i40e_rxtx.c | 192 +++++---------
drivers/net/i40e/i40e_rxtx.h | 61 +----
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 30 ++-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 26 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 144 +---------
drivers/net/i40e/i40e_rxtx_vec_common.h | 198 +-------------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 30 ++-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 30 ++-
drivers/net/i40e/meson.build | 2 +-
drivers/net/iavf/iavf.h | 2 +-
drivers/net/iavf/iavf_ethdev.c | 4 +-
drivers/net/iavf/iavf_rxtx.c | 195 +++++---------
drivers/net/iavf/iavf_rxtx.h | 62 +----
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 47 ++--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 214 +++------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 178 +------------
drivers/net/iavf/iavf_rxtx_vec_neon.c | 3 +-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 59 ++---
drivers/net/iavf/iavf_vchnl.c | 8 +-
drivers/net/iavf/meson.build | 2 +-
drivers/net/ice/ice_dcf.c | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_diagnose.c | 2 +-
drivers/net/ice/ice_ethdev.c | 2 +-
drivers/net/ice/ice_ethdev.h | 7 +-
drivers/net/ice/ice_rxtx.c | 163 +++++-------
drivers/net/ice/ice_rxtx.h | 52 +---
drivers/net/ice/ice_rxtx_vec_avx2.c | 26 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 153 +----------
drivers/net/ice/ice_rxtx_vec_common.h | 222 +---------------
drivers/net/ice/ice_rxtx_vec_sse.c | 35 ++-
drivers/net/ice/meson.build | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 2 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 139 +++++-----
drivers/net/ixgbe/ixgbe_rxtx.h | 73 +----
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 156 ++---------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 40 ++-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 40 ++-
drivers/net/ixgbe/meson.build | 2 +-
47 files changed, 1000 insertions(+), 2027 deletions(-)
create mode 100644 drivers/net/_common_intel/rx.h
create mode 100644 drivers/net/_common_intel/tx.h
--
2.43.0
* [PATCH v4 01/24] net/_common_intel: add pkt reassembly fn for intel drivers
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
@ 2024-12-20 14:38 ` Bruce Richardson
2024-12-20 16:15 ` Stephen Hemminger
2024-12-20 14:38 ` [PATCH v4 02/24] net/_common_intel: provide common Tx entry structures Bruce Richardson
` (22 subsequent siblings)
23 siblings, 1 reply; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:38 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, David Christensen, Ian Stokes,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
The code for reassembling a single, multi-mbuf packet from multiple
buffers received from the NIC is duplicated across many drivers. Rather
than having multiple copies of this function, we can create a
"_common_intel" directory to hold such functions and consolidate the
multiple copies down to a single one for easier maintenance.
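The key input to the consolidated function is the per-buffer
split_flags array: a non-zero flag means "this buffer is not the last
segment of its packet". A toy illustration of that contract (flag
values invented for the example; the real code also stitches the mbuf
chains together and adjusts lengths for CRC stripping):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* four received buffers: the first packet spans buffers 0-2,
	 * the second packet is buffer 3 alone */
	const uint8_t split_flags[4] = { 1, 1, 0, 0 };
	unsigned int pkt_idx = 0, segs = 0;

	for (unsigned int buf_idx = 0; buf_idx < 4; buf_idx++) {
		segs++;
		if (!split_flags[buf_idx]) {	/* last seg of a packet */
			printf("packet %u: %u segment(s)\n", pkt_idx++, segs);
			segs = 0;
		}
	}
	return 0;
}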
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/rx.h | 79 +++++++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 64 +-----------------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +-
drivers/net/i40e/meson.build | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 8 +--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 8 +--
drivers/net/iavf/iavf_rxtx_vec_common.h | 65 +------------------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 +--
drivers/net/iavf/meson.build | 2 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +--
drivers/net/ice/ice_rxtx_vec_common.h | 66 +------------------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 +-
drivers/net/ice/meson.build | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 63 +-----------------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 4 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 4 +-
drivers/net/ixgbe/meson.build | 2 +-
22 files changed, 121 insertions(+), 292 deletions(-)
create mode 100644 drivers/net/_common_intel/rx.h
diff --git a/drivers/net/_common_intel/rx.h b/drivers/net/_common_intel/rx.h
new file mode 100644
index 0000000000..5bd2fea7e3
--- /dev/null
+++ b/drivers/net/_common_intel/rx.h
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef _COMMON_INTEL_RX_H_
+#define _COMMON_INTEL_RX_H_
+
+#include <stdint.h>
+#include <unistd.h>
+#include <rte_mbuf.h>
+
+#define CI_RX_BURST 32
+
+static inline uint16_t
+ci_rx_reassemble_packets(struct rte_mbuf **rx_bufs, uint16_t nb_bufs, uint8_t *split_flags,
+ struct rte_mbuf **pkt_first_seg, struct rte_mbuf **pkt_last_seg,
+ const uint8_t crc_len)
+{
+ struct rte_mbuf *pkts[CI_RX_BURST] = {0}; /*finished pkts*/
+ struct rte_mbuf *start = *pkt_first_seg;
+ struct rte_mbuf *end = *pkt_last_seg;
+ unsigned int pkt_idx, buf_idx;
+
+ for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+ if (end) {
+ /* processing a split packet */
+ end->next = rx_bufs[buf_idx];
+ rx_bufs[buf_idx]->data_len += crc_len;
+
+ start->nb_segs++;
+ start->pkt_len += rx_bufs[buf_idx]->data_len;
+ end = end->next;
+
+ if (!split_flags[buf_idx]) {
+ /* it's the last packet of the set */
+ start->hash = end->hash;
+ start->vlan_tci = end->vlan_tci;
+ start->ol_flags = end->ol_flags;
+ /* we need to strip crc for the whole packet */
+ start->pkt_len -= crc_len;
+ if (end->data_len > crc_len) {
+ end->data_len -= crc_len;
+ } else {
+ /* free up last mbuf */
+ struct rte_mbuf *secondlast = start;
+
+ start->nb_segs--;
+ while (secondlast->next != end)
+ secondlast = secondlast->next;
+ secondlast->data_len -= (crc_len - end->data_len);
+ secondlast->next = NULL;
+ rte_pktmbuf_free_seg(end);
+ }
+ pkts[pkt_idx++] = start;
+ start = NULL;
+ end = NULL;
+ }
+ } else {
+ /* not processing a split packet */
+ if (!split_flags[buf_idx]) {
+ /* not a split packet, save and skip */
+ pkts[pkt_idx++] = rx_bufs[buf_idx];
+ continue;
+ }
+ start = rx_bufs[buf_idx];
+ end = start;
+ rx_bufs[buf_idx]->data_len += crc_len;
+ rx_bufs[buf_idx]->pkt_len += crc_len;
+ }
+ }
+
+ /* save the partial packet for next time */
+ *pkt_first_seg = start;
+ *pkt_last_seg = end;
+ memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+ return pkt_idx;
+}
+
+#endif /* _COMMON_INTEL_RX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index b6b0d38ec1..95829f65d5 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -494,8 +494,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
if (i == nb_bufs)
return nb_bufs;
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 19cf0ac718..6dd6e55d9c 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -657,8 +657,8 @@ i40e_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/*
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 3b2750221b..506f1b5878 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -725,8 +725,8 @@ i40e_recv_scattered_burst_vec_avx512(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 8b745630e4..1248cecacd 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <ethdev_driver.h>
#include <rte_malloc.h>
+#include <_common_intel/rx.h>
#include "i40e_ethdev.h"
#include "i40e_rxtx.h"
@@ -15,69 +16,6 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static inline uint16_t
-reassemble_packets(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[RTE_I40E_VPMD_RX_BURST]; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end != NULL) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len)
- end->data_len -= rxq->crc_len;
- else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index e1c5c7041b..159d971796 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -623,8 +623,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index ad560d2b6b..3a8128e014 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -641,8 +641,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/i40e/meson.build b/drivers/net/i40e/meson.build
index 5c93493124..0e0b416b8f 100644
--- a/drivers/net/i40e/meson.build
+++ b/drivers/net/i40e/meson.build
@@ -36,7 +36,7 @@ sources = files(
testpmd_sources = files('i40e_testpmd.c')
deps += ['hash']
-includes += include_directories('base')
+includes += include_directories('base', '..')
if arch_subdir == 'x86'
sources += files('i40e_rxtx_vec_sse.c')
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 49d41af953..0baf5045c8 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1508,8 +1508,8 @@ iavf_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1597,8 +1597,8 @@ iavf_recv_scattered_burst_vec_avx2_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index d6a861bf80..5a88007096 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1685,8 +1685,8 @@ iavf_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1761,8 +1761,8 @@ iavf_recv_scattered_burst_vec_avx512_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 5c5220048d..26b6f07614 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -8,6 +8,7 @@
#include <ethdev_driver.h>
#include <rte_malloc.h>
+#include <_common_intel/rx.h>
#include "iavf.h"
#include "iavf_rxtx.h"
@@ -15,70 +16,6 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static __rte_always_inline uint16_t
-reassemble_packets(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[IAVF_VPMD_RX_MAX_BURST];
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len) {
- end->data_len -= rxq->crc_len;
- } else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = NULL;
- end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
iavf_tx_free_bufs(struct iavf_tx_queue *txq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 0db6fa8bd4..48b01462ea 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1238,8 +1238,8 @@ iavf_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -1307,8 +1307,8 @@ iavf_recv_scattered_burst_vec_flex_rxd(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index b48bb83438..9106e016ef 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -5,7 +5,7 @@ if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
subdir_done()
endif
-includes += include_directories('../../common/iavf')
+includes += include_directories('../../common/iavf', '..')
testpmd_sources = files('iavf_testpmd.c')
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index d6e88dbb29..ca247b155c 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -726,8 +726,8 @@ ice_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index add095ef06..1e603d5d8f 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -763,8 +763,8 @@ ice_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
@@ -805,8 +805,8 @@ ice_recv_scattered_burst_vec_avx512_offload(void *rx_queue,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 4b73465af5..dd7da4761f 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -5,77 +5,13 @@
#ifndef _ICE_RXTX_VEC_COMMON_H_
#define _ICE_RXTX_VEC_COMMON_H_
+#include <_common_intel/rx.h>
#include "ice_rxtx.h"
#ifndef __INTEL_COMPILER
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
-static inline uint16_t
-ice_rx_reassemble_packets(struct ice_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[ICE_VPMD_RX_BURST] = {0}; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->vlan_tci = end->vlan_tci;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len) {
- end->data_len -= rxq->crc_len;
- } else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = NULL;
- end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- start = rx_bufs[buf_idx];
- end = start;
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index c01d8ede29..01533454ba 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -640,8 +640,8 @@ ice_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index 1c9dc0cc6d..02c028db73 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -19,7 +19,7 @@ sources = files(
testpmd_sources = files('ice_testpmd.c')
deps += ['hash', 'net', 'common_iavf']
-includes += include_directories('base', '../../common/iavf')
+includes += include_directories('base', '..')
if arch_subdir == 'x86'
sources += files('ice_rxtx_vec_sse.c')
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index a4d9ec9b08..2bab17c934 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -7,71 +7,10 @@
#include <stdint.h>
#include <ethdev_driver.h>
+#include <_common_intel/rx.h>
#include "ixgbe_ethdev.h"
#include "ixgbe_rxtx.h"
-static inline uint16_t
-reassemble_packets(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_bufs,
- uint16_t nb_bufs, uint8_t *split_flags)
-{
- struct rte_mbuf *pkts[nb_bufs]; /*finished pkts*/
- struct rte_mbuf *start = rxq->pkt_first_seg;
- struct rte_mbuf *end = rxq->pkt_last_seg;
- unsigned int pkt_idx, buf_idx;
-
- for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
- if (end != NULL) {
- /* processing a split packet */
- end->next = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
-
- start->nb_segs++;
- start->pkt_len += rx_bufs[buf_idx]->data_len;
- end = end->next;
-
- if (!split_flags[buf_idx]) {
- /* it's the last packet of the set */
- start->hash = end->hash;
- start->ol_flags = end->ol_flags;
- /* we need to strip crc for the whole packet */
- start->pkt_len -= rxq->crc_len;
- if (end->data_len > rxq->crc_len)
- end->data_len -= rxq->crc_len;
- else {
- /* free up last mbuf */
- struct rte_mbuf *secondlast = start;
-
- start->nb_segs--;
- while (secondlast->next != end)
- secondlast = secondlast->next;
- secondlast->data_len -= (rxq->crc_len -
- end->data_len);
- secondlast->next = NULL;
- rte_pktmbuf_free_seg(end);
- }
- pkts[pkt_idx++] = start;
- start = end = NULL;
- }
- } else {
- /* not processing a split packet */
- if (!split_flags[buf_idx]) {
- /* not a split packet, save and skip */
- pkts[pkt_idx++] = rx_bufs[buf_idx];
- continue;
- }
- end = start = rx_bufs[buf_idx];
- rx_bufs[buf_idx]->data_len += rxq->crc_len;
- rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
- }
- }
-
- /* save the partial packet for next time */
- rxq->pkt_first_seg = start;
- rxq->pkt_last_seg = end;
- memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
- return pkt_idx;
-}
-
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 952b032eb6..7b35093075 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -516,8 +516,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a77370cdb7..a709bf8c7f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -639,8 +639,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_bufs;
rxq->pkt_first_seg = rx_pkts[i];
}
- return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
- &split_flags[i]);
+ return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+ &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len);
}
/**
diff --git a/drivers/net/ixgbe/meson.build b/drivers/net/ixgbe/meson.build
index 0ae12dd5ff..a65ff51379 100644
--- a/drivers/net/ixgbe/meson.build
+++ b/drivers/net/ixgbe/meson.build
@@ -35,6 +35,6 @@ elif arch_subdir == 'arm'
sources += files('ixgbe_recycle_mbufs_vec_common.c')
endif
-includes += include_directories('base')
+includes += include_directories('base', '..')
headers = files('rte_pmd_ixgbe.h')
--
2.43.0
* [PATCH v4 02/24] net/_common_intel: provide common Tx entry structures
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-12-20 14:38 ` [PATCH v4 01/24] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
@ 2024-12-20 14:38 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 03/24] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
` (21 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:38 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
The Tx entry structures, both vector and scalar, are common across Intel
drivers, so provide a single definition to be used everywhere.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 27 +++++++++++++++++++
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 18 ++++++-------
drivers/net/i40e/i40e_rxtx.h | 14 +++-------
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 ++---
drivers/net/i40e/i40e_rxtx_vec_common.h | 4 +--
drivers/net/i40e/i40e_rxtx_vec_neon.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 2 +-
drivers/net/iavf/iavf_rxtx.c | 12 ++++-----
drivers/net/iavf/iavf_rxtx.h | 14 +++-------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 10 +++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 4 +--
drivers/net/iavf/iavf_rxtx_vec_sse.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 2 +-
drivers/net/ice/ice_rxtx.c | 16 +++++------
drivers/net/ice/ice_rxtx.h | 13 ++-------
drivers/net/ice/ice_rxtx_vec_avx2.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 6 ++---
drivers/net/ice/ice_rxtx_vec_common.h | 6 ++---
drivers/net/ice/ice_rxtx_vec_sse.c | 2 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++++------
drivers/net/ixgbe/ixgbe_rxtx.h | 22 +++------------
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 8 +++---
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +-
29 files changed, 105 insertions(+), 117 deletions(-)
create mode 100644 drivers/net/_common_intel/tx.h
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
new file mode 100644
index 0000000000..384352b9db
--- /dev/null
+++ b/drivers/net/_common_intel/tx.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef _COMMON_INTEL_TX_H_
+#define _COMMON_INTEL_TX_H_
+
+#include <stdint.h>
+#include <rte_mbuf.h>
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue.
+ */
+struct ci_tx_entry {
+ struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+ uint16_t next_id; /* Index of next descriptor in ring. */
+ uint16_t last_id; /* Index of last scattered descriptor. */
+};
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue in vector Tx.
+ */
+struct ci_tx_entry_vec {
+ struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+};
+
+#endif /* _COMMON_INTEL_TX_H_ */
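Later hunks in this series reuse the scalar sw_ring allocation for the
vector paths by casting it to the smaller entry type (e.g.
"struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;" below). That
reinterpretation assumes the mbuf pointer sits at offset zero in both
structures; a compile-time check of that layout assumption (illustrative
only, not part of the patch) could be:
	#include <stddef.h>
	#include <assert.h>
	static_assert(offsetof(struct ci_tx_entry, mbuf) == 0,
			"vector paths reinterpret sw_ring via ci_tx_entry_vec");
	static_assert(offsetof(struct ci_tx_entry_vec, mbuf) == 0,
			"vector paths reinterpret sw_ring via ci_tx_entry_vec");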
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 14424c9921..260d238ce4 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -56,7 +56,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
struct i40e_tx_queue *txq = tx_queue;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
uint16_t nb_recycle_mbufs;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 839c8a5442..2e1f07d2a1 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -378,7 +378,7 @@ i40e_build_ctob(uint32_t td_cmd,
static inline int
i40e_xmit_cleanup(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct i40e_tx_desc *txd = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -1081,8 +1081,8 @@ uint16_t
i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct i40e_tx_queue *txq;
- struct i40e_tx_entry *sw_ring;
- struct i40e_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
volatile struct i40e_tx_desc *txd;
volatile struct i40e_tx_desc *txr;
struct rte_mbuf *tx_pkt;
@@ -1331,7 +1331,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t tx_rs_thresh = txq->tx_rs_thresh;
uint16_t i = 0, j = 0;
struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
@@ -1418,7 +1418,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
uint16_t nb_pkts)
{
volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
- struct i40e_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
int mainpart, leftover;
@@ -2555,7 +2555,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket("i40e tx sw ring",
- sizeof(struct i40e_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2723,7 +2723,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
*/
#ifdef CC_AVX512_SUPPORT
if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
- struct i40e_vec_tx_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
if (txq->tx_tail < i) {
@@ -2768,7 +2768,7 @@ static int
i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
uint32_t free_cnt)
{
- struct i40e_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -2874,7 +2874,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
void
i40e_reset_tx_queue(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint16_t i, prev, size;
if (!txq) {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 33fc9770d9..0f5d3cb0b7 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _I40E_RXTX_H_
#define _I40E_RXTX_H_
+#include <_common_intel/tx.h>
+
#define RTE_PMD_I40E_RX_MAX_BURST 32
#define RTE_PMD_I40E_TX_MAX_BURST 32
@@ -122,16 +124,6 @@ struct i40e_rx_queue {
const struct rte_memzone *mz;
};
-struct i40e_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct i40e_vec_tx_entry {
- struct rte_mbuf *mbuf;
-};
-
/*
* Structure associated with each TX queue.
*/
@@ -139,7 +131,7 @@ struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
uint64_t tx_ring_phys_addr; /**< TX ring DMA address */
volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
- struct i40e_tx_entry *sw_ring; /**< virtual address of SW ring */
+ struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
volatile uint8_t *qtx_tail; /**< register address of tail */
uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 95829f65d5..ca1038eaa6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -553,7 +553,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 6dd6e55d9c..e8441de759 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -745,7 +745,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 506f1b5878..8b8a16daa8 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -757,7 +757,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
static __rte_always_inline int
i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
{
- struct i40e_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -920,7 +920,7 @@ vtx(volatile struct i40e_tx_desc *txdp,
}
static __rte_always_inline void
-tx_backlog_entry_avx512(struct i40e_vec_tx_entry *txep,
+tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -935,7 +935,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 1248cecacd..619fb89110 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -19,7 +19,7 @@
static __rte_always_inline int
i40e_tx_free_bufs(struct i40e_tx_queue *txq)
{
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -85,7 +85,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct i40e_tx_entry *txep,
+tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 159d971796..9b90a32e28 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -681,7 +681,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 3a8128e014..e1fa2ed543 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -700,7 +700,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct i40e_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6a093c6746..e337f20073 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -284,7 +284,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq)
static inline void
reset_tx_queue(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint32_t i, size;
uint16_t prev;
@@ -860,7 +860,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket("iavf tx sw ring",
- sizeof(struct iavf_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2379,7 +2379,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue,
static inline int
iavf_xmit_cleanup(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -2797,8 +2797,8 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct iavf_tx_queue *txq = tx_queue;
volatile struct iavf_tx_desc *txr = txq->tx_ring;
- struct iavf_tx_entry *txe_ring = txq->sw_ring;
- struct iavf_tx_entry *txe, *txn;
+ struct ci_tx_entry *txe_ring = txq->sw_ring;
+ struct ci_tx_entry *txe, *txn;
struct rte_mbuf *mb, *mb_seg;
uint64_t buf_dma_addr;
uint16_t desc_idx, desc_idx_last;
@@ -4268,7 +4268,7 @@ static int
iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
uint32_t free_cnt)
{
- struct iavf_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 7b56076d32..1a191f2c89 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _IAVF_RXTX_H_
#define _IAVF_RXTX_H_
+#include <_common_intel/tx.h>
+
/* In QLEN must be whole number of 32 descriptors. */
#define IAVF_ALIGN_RING_DESC 32
#define IAVF_MIN_RING_DESC 64
@@ -271,22 +273,12 @@ struct iavf_rx_queue {
uint64_t hw_time_update;
};
-struct iavf_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct iavf_tx_vec_entry {
- struct rte_mbuf *mbuf;
-};
-
/* Structure associated with each TX queue. */
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
- struct iavf_tx_entry *sw_ring; /* address array of SW ring */
+ struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
uint16_t tx_tail; /* current value of tail */
volatile uint8_t *qtx_tail; /* register address of tail */
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 0baf5045c8..e7d3d52655 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1736,7 +1736,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 5a88007096..a899309f94 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1847,7 +1847,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
static __rte_always_inline int
iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
{
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -1960,7 +1960,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry_avx512(struct iavf_tx_vec_entry *txep,
+tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -2313,7 +2313,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
@@ -2380,7 +2380,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_vec_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, nb_mbuf, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
@@ -2478,7 +2478,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */
const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
- struct iavf_tx_vec_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
if (!txq->sw_ring || txq->nb_free == max_desc)
return;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 26b6f07614..df40857218 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -19,7 +19,7 @@
static __rte_always_inline int
iavf_tx_free_bufs(struct iavf_tx_queue *txq)
{
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -74,7 +74,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct iavf_tx_entry *txep,
+tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 48b01462ea..0a30b1ef64 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1368,7 +1368,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct iavf_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 91f4943a11..4b98e4066b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -389,7 +389,7 @@ reset_rx_queue(struct ice_rx_queue *rxq)
static inline void
reset_tx_queue(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint32_t i, size;
uint16_t prev;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0c7106c7e0..d584086a36 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1028,7 +1028,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
static void
ice_reset_tx_queue(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txe;
+ struct ci_tx_entry *txe;
uint16_t i, prev, size;
if (!txq) {
@@ -1509,7 +1509,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring =
rte_zmalloc_socket(NULL,
- sizeof(struct ice_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq->sw_ring) {
@@ -2837,7 +2837,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
static inline int
ice_xmit_cleanup(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct ice_tx_desc *txd = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -2961,8 +2961,8 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
struct ice_tx_queue *txq;
volatile struct ice_tx_desc *tx_ring;
volatile struct ice_tx_desc *txd;
- struct ice_tx_entry *sw_ring;
- struct ice_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
struct rte_mbuf *tx_pkt;
struct rte_mbuf *m_seg;
uint32_t cd_tunneling_params;
@@ -3184,7 +3184,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
static __rte_always_inline int
ice_tx_free_bufs(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t i;
if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
@@ -3221,7 +3221,7 @@ static int
ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
uint32_t free_cnt)
{
- struct ice_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -3361,7 +3361,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
- struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
int mainpart, leftover;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 45f25b3609..8d1a1a8676 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -5,6 +5,7 @@
#ifndef _ICE_RXTX_H_
#define _ICE_RXTX_H_
+#include <_common_intel/tx.h>
#include "ice_ethdev.h"
#define ICE_ALIGN_RING_DESC 32
@@ -144,21 +145,11 @@ struct ice_rx_queue {
bool ts_enable; /* if rxq timestamp is enabled */
};
-struct ice_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-struct ice_vec_tx_entry {
- struct rte_mbuf *mbuf;
-};
-
struct ice_tx_queue {
uint16_t nb_tx_desc; /* number of TX descriptors */
rte_iova_t tx_ring_dma; /* TX ring DMA address */
volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
- struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
uint16_t tx_tail; /* current value of tail register */
volatile uint8_t *qtx_tail; /* register address of tail */
uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index ca247b155c..cf1862263a 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 1e603d5d8f..6b6aa3f1fe 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -862,7 +862,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
static __rte_always_inline int
ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
{
- struct ice_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -1040,7 +1040,7 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
}
static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ice_vec_tx_entry *txep,
+ice_tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -1055,7 +1055,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_vec_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index dd7da4761f..3dc6061e84 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -15,7 +15,7 @@
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
{
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t n;
uint32_t i;
int nb_free = 0;
@@ -70,7 +70,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
}
static __rte_always_inline void
-ice_tx_backlog_entry(struct ice_tx_entry *txep,
+ice_tx_backlog_entry(struct ci_tx_entry *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -135,7 +135,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
- struct ice_vec_tx_entry *swr = (void *)txq->sw_ring;
+ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
if (txq->tx_tail < i) {
for (; i < txq->nb_tx_desc; i++) {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 01533454ba..889b754cc1 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ice_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index d451562269..2241726ad8 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -52,7 +52,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
struct ixgbe_tx_queue *txq = tx_queue;
- struct ixgbe_tx_entry *txep;
+ struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
uint32_t status;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 7d16eb9df7..db4b993ebc 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -100,7 +100,7 @@
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry *txep;
+ struct ci_tx_entry *txep;
uint32_t status;
int i, nb_free = 0;
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
@@ -199,7 +199,7 @@ ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
- struct ixgbe_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+ struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP-1;
int mainpart, leftover;
@@ -563,7 +563,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
static inline int
ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry *sw_ring = txq->sw_ring;
+ struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -624,8 +624,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
struct ixgbe_tx_queue *txq;
- struct ixgbe_tx_entry *sw_ring;
- struct ixgbe_tx_entry *txe, *txn;
+ struct ci_tx_entry *sw_ring;
+ struct ci_tx_entry *txe, *txn;
volatile union ixgbe_adv_tx_desc *txr;
volatile union ixgbe_adv_tx_desc *txd, *txp;
struct rte_mbuf *tx_pkt;
@@ -2352,7 +2352,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
static int
ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
{
- struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
+ struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
uint16_t nb_tx_free_last;
uint16_t nb_tx_to_clean;
@@ -2490,7 +2490,7 @@ static void __rte_cold
ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = {{0}};
- struct ixgbe_tx_entry *txe = txq->sw_ring;
+ struct ci_tx_entry *txe = txq->sw_ring;
uint16_t prev, i;
/* Zero out HW ring memory */
@@ -2795,7 +2795,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate software ring */
txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
- sizeof(struct ixgbe_tx_entry) * nb_desc,
+ sizeof(struct ci_tx_entry) * nb_desc,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq->sw_ring == NULL) {
ixgbe_tx_queue_release(txq);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 0550c1da60..1647396419 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -5,6 +5,8 @@
#ifndef _IXGBE_RXTX_H_
#define _IXGBE_RXTX_H_
+#include <_common_intel/tx.h>
+
/*
* Rings setup and release.
*
@@ -75,22 +77,6 @@ struct ixgbe_scattered_rx_entry {
struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
};
-/**
- * Structure associated with each descriptor of the TX ring of a TX queue.
- */
-struct ixgbe_tx_entry {
- struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
- uint16_t next_id; /**< Index of next descriptor in ring. */
- uint16_t last_id; /**< Index of last scattered descriptor. */
-};
-
-/**
- * Structure associated with each descriptor of the TX ring of a TX queue.
- */
-struct ixgbe_tx_entry_v {
- struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
-};
-
/**
* Structure associated with each RX queue.
*/
@@ -202,8 +188,8 @@ struct ixgbe_tx_queue {
volatile union ixgbe_adv_tx_desc *tx_ring;
uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
union {
- struct ixgbe_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
- struct ixgbe_tx_entry_v *sw_ring_v; /**< address of SW ring for vector PMD */
+ struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
+ struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
};
volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
uint16_t nb_tx_desc; /**< number of TX descriptors. */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 2bab17c934..e9592c0d08 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -14,7 +14,7 @@
static __rte_always_inline int
ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
{
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint32_t status;
uint32_t n;
uint32_t i;
@@ -69,7 +69,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
}
static __rte_always_inline void
-tx_backlog_entry(struct ixgbe_tx_entry_v *txep,
+tx_backlog_entry(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
int i;
@@ -82,7 +82,7 @@ static inline void
_ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
{
unsigned int i;
- struct ixgbe_tx_entry_v *txe;
+ struct ci_tx_entry_vec *txe;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc)
@@ -149,7 +149,7 @@ static inline void
_ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } };
- struct ixgbe_tx_entry_v *txe = txq->sw_ring_v;
+ struct ci_tx_entry_vec *txe = txq->sw_ring_v;
uint16_t i;
/* Zero out HW ring memory */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 7b35093075..02b53c008e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -573,7 +573,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = DCMD_DTYP_FLAGS;
uint64_t rs = IXGBE_ADVTXD_DCMD_RS | DCMD_DTYP_FLAGS;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a709bf8c7f..c8b5377c9f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -695,7 +695,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
- struct ixgbe_tx_entry_v *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = DCMD_DTYP_FLAGS;
uint64_t rs = IXGBE_ADVTXD_DCMD_RS|DCMD_DTYP_FLAGS;
--
2.43.0
* [PATCH v4 03/24] net/_common_intel: add Tx mbuf ring replenish fn
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-12-20 14:38 ` [PATCH v4 01/24] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
2024-12-20 14:38 ` [PATCH v4 02/24] net/_common_intel: provide common Tx entry structures Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 04/24] drivers/net: align Tx queue struct field names Bruce Richardson
` (20 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, David Christensen, Ian Stokes,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
Move the short function used to place mbufs on the SW Tx ring to common
code to avoid duplication.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 7 +++++++
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_common.h | 10 ----------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 ++--
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 ++--
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 ++--
drivers/net/iavf/iavf_rxtx_vec_common.h | 10 ----------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 4 ++--
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 ++--
drivers/net/ice/ice_rxtx_vec_common.h | 10 ----------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 ++--
12 files changed, 23 insertions(+), 46 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 384352b9db..5397007411 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -24,4 +24,11 @@ struct ci_tx_entry_vec {
struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
};
+static __rte_always_inline void
+ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ for (uint16_t i = 0; i < nb_pkts; ++i)
+ txep[i].mbuf = tx_pkts[i];
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
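Each vector Tx path invokes the helper with the same wrap-around split, as
the hunks below show. A condensed sketch of that pattern (descriptor writes
omitted), using the driver-local txq, tx_id, tx_pkts and nb_commit
variables from those paths:
	struct ci_tx_entry *txep = &txq->sw_ring[tx_id];
	uint16_t n = (uint16_t)(txq->nb_tx_desc - tx_id);
	if (nb_commit >= n) {
		/* burst wraps past the end of the ring: fill the tail first */
		ci_tx_backlog_entry(txep, tx_pkts, n);
		tx_pkts += n;
		nb_commit -= n;
		txep = &txq->sw_ring[0];
	}
	/* remainder, or the whole burst if there was no wrap */
	ci_tx_backlog_entry(txep, tx_pkts, nb_commit);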
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index ca1038eaa6..80f07a3e10 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -575,7 +575,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -592,7 +592,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index e8441de759..b26bae4757 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -765,7 +765,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -783,7 +783,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 619fb89110..325e99c1a4 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -84,16 +84,6 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 9b90a32e28..26bc345a0a 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -702,7 +702,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -719,7 +719,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index e1fa2ed543..ebc32b0d27 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -721,7 +721,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -738,7 +738,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index e7d3d52655..28885800e0 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1757,7 +1757,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
iavf_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -1775,7 +1775,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index df40857218..2c118cc059 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -73,16 +73,6 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
return txq->rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 0a30b1ef64..bc4b8f14c8 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1390,7 +1390,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -1407,7 +1407,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index cf1862263a..336697e72d 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -881,7 +881,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -899,7 +899,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- ice_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 3dc6061e84..32e4541267 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -69,16 +69,6 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-ice_tx_backlog_entry(struct ci_tx_entry *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 889b754cc1..debdd8f6a2 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -724,7 +724,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
ice_vtx1(txdp, *tx_pkts, flags);
@@ -741,7 +741,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring[tx_id];
}
- ice_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v4 04/24] drivers/net: align Tx queue struct field names
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (2 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 03/24] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 05/24] drivers/net: add prefix for driver-specific structs Bruce Richardson
` (19 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin,
Konstantin Ananyev, Anatoly Burakov, Wathsala Vithanage
Across the various Intel drivers, fields in the Tx queue structure
which serve the same function are sometimes given different names. Do
some renaming to align things better for future merging.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 6 +--
drivers/net/i40e/i40e_rxtx.h | 2 +-
drivers/net/iavf/iavf_rxtx.c | 60 ++++++++++++-------------
drivers/net/iavf/iavf_rxtx.h | 14 +++---
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 19 ++++----
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 57 +++++++++++------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 24 +++++-----
drivers/net/iavf/iavf_rxtx_vec_sse.c | 18 ++++----
drivers/net/iavf/iavf_vchnl.c | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++----
drivers/net/ixgbe/ixgbe_rxtx.h | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +-
14 files changed, 116 insertions(+), 114 deletions(-)
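Note: as an aid to review, the renames in this patch converge on the
following bookkeeping names. This is an illustrative sketch only,
collecting the before/after pairs from the diff into one invented
struct (example_tx_queue is not a real type; the real structs carry
many more fields):

	#include <stdint.h>
	#include <rte_common.h>	/* rte_iova_t */

	struct example_tx_queue {
		rte_iova_t tx_ring_dma;		/* was tx_ring_phys_addr */
		volatile uint8_t *qtx_tail;	/* was tdt_reg_addr (ixgbe) */
		uint16_t nb_tx_used;		/* was nb_used (iavf) */
		uint16_t nb_tx_free;		/* was nb_free (iavf) */
		uint16_t tx_free_thresh;	/* was free_thresh (iavf) */
		uint16_t tx_rs_thresh;		/* was rs_thresh (iavf) */
		uint16_t tx_next_dd;		/* was next_dd (iavf) */
		uint16_t tx_next_rs;		/* was next_rs (iavf) */
	};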
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 2e1f07d2a1..b0bb20fe9a 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2549,7 +2549,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
/* Allocate software ring */
@@ -2923,7 +2923,7 @@ i40e_tx_queue_init(struct i40e_tx_queue *txq)
/* clear the context structure first */
memset(&tx_ctx, 0, sizeof(tx_ctx));
tx_ctx.new_context = 1;
- tx_ctx.base = txq->tx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT;
+ tx_ctx.base = txq->tx_ring_dma / I40E_QUEUE_BASE_ADDR_UNIT;
tx_ctx.qlen = txq->nb_tx_desc;
#ifdef RTE_LIBRTE_IEEE1588
@@ -3209,7 +3209,7 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
txq->vsi = pf->fdir.fdir_vsi;
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
/*
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 0f5d3cb0b7..f420c98687 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -129,7 +129,7 @@ struct i40e_rx_queue {
*/
struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
- uint64_t tx_ring_phys_addr; /**< TX ring DMA address */
+ rte_iova_t tx_ring_dma; /**< TX ring DMA address */
volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index e337f20073..adaaeb4625 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -216,8 +216,8 @@ static inline bool
check_tx_vec_allow(struct iavf_tx_queue *txq)
{
if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) &&
- txq->rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
- txq->rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) {
+ txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
+ txq->tx_rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) {
PMD_INIT_LOG(DEBUG, "Vector tx can be enabled on this txq.");
return true;
}
@@ -309,13 +309,13 @@ reset_tx_queue(struct iavf_tx_queue *txq)
}
txq->tx_tail = 0;
- txq->nb_used = 0;
+ txq->nb_tx_used = 0;
txq->last_desc_cleaned = txq->nb_tx_desc - 1;
- txq->nb_free = txq->nb_tx_desc - 1;
+ txq->nb_tx_free = txq->nb_tx_desc - 1;
- txq->next_dd = txq->rs_thresh - 1;
- txq->next_rs = txq->rs_thresh - 1;
+ txq->tx_next_dd = txq->tx_rs_thresh - 1;
+ txq->tx_next_rs = txq->tx_rs_thresh - 1;
}
static int
@@ -845,8 +845,8 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
txq->nb_tx_desc = nb_desc;
- txq->rs_thresh = tx_rs_thresh;
- txq->free_thresh = tx_free_thresh;
+ txq->tx_rs_thresh = tx_rs_thresh;
+ txq->tx_free_thresh = tx_free_thresh;
txq->queue_id = queue_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
@@ -881,7 +881,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
rte_free(txq);
return -ENOMEM;
}
- txq->tx_ring_phys_addr = mz->iova;
+ txq->tx_ring_dma = mz->iova;
txq->tx_ring = (struct iavf_tx_desc *)mz->addr;
txq->mz = mz;
@@ -2387,7 +2387,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
volatile struct iavf_tx_desc *txd = txq->tx_ring;
- desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+ desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
if (desc_to_clean_to >= nb_tx_desc)
desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
@@ -2411,7 +2411,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
txq->last_desc_cleaned = desc_to_clean_to;
- txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
return 0;
}
@@ -2807,7 +2807,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Check if the descriptor ring needs to be cleaned. */
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_xmit_cleanup(txq);
desc_idx = txq->tx_tail;
@@ -2862,14 +2862,14 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
"port_id=%u queue_id=%u tx_first=%u tx_last=%u",
txq->port_id, txq->queue_id, desc_idx, desc_idx_last);
- if (nb_desc_required > txq->nb_free) {
+ if (nb_desc_required > txq->nb_tx_free) {
if (iavf_xmit_cleanup(txq)) {
if (idx == 0)
return 0;
goto end_of_tx;
}
- if (unlikely(nb_desc_required > txq->rs_thresh)) {
- while (nb_desc_required > txq->nb_free) {
+ if (unlikely(nb_desc_required > txq->tx_rs_thresh)) {
+ while (nb_desc_required > txq->nb_tx_free) {
if (iavf_xmit_cleanup(txq)) {
if (idx == 0)
return 0;
@@ -2991,10 +2991,10 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* The last packet data descriptor needs End Of Packet (EOP) */
ddesc_cmd = IAVF_TX_DESC_CMD_EOP;
- txq->nb_used = (uint16_t)(txq->nb_used + nb_desc_required);
- txq->nb_free = (uint16_t)(txq->nb_free - nb_desc_required);
+ txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_desc_required);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_desc_required);
- if (txq->nb_used >= txq->rs_thresh) {
+ if (txq->nb_tx_used >= txq->tx_rs_thresh) {
PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
"%4u (port=%d queue=%d)",
desc_idx_last, txq->port_id, txq->queue_id);
@@ -3002,7 +3002,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ddesc_cmd |= IAVF_TX_DESC_CMD_RS;
/* Update txq RS bit counters */
- txq->nb_used = 0;
+ txq->nb_tx_used = 0;
}
ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
@@ -4278,11 +4278,11 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
tx_id = txq->tx_tail;
tx_last = tx_id;
- if (txq->nb_free == 0 && iavf_xmit_cleanup(txq))
+ if (txq->nb_tx_free == 0 && iavf_xmit_cleanup(txq))
return 0;
- nb_tx_to_clean = txq->nb_free;
- nb_tx_free_last = txq->nb_free;
+ nb_tx_to_clean = txq->nb_tx_free;
+ nb_tx_free_last = txq->nb_tx_free;
if (!free_cnt)
free_cnt = txq->nb_tx_desc;
@@ -4305,16 +4305,16 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
tx_id = swr_ring[tx_id].next_id;
} while (--nb_tx_to_clean && pkt_cnt < free_cnt && tx_id != tx_last);
- if (txq->rs_thresh > txq->nb_tx_desc -
- txq->nb_free || tx_id == tx_last)
+ if (txq->tx_rs_thresh > txq->nb_tx_desc -
+ txq->nb_tx_free || tx_id == tx_last)
break;
if (pkt_cnt < free_cnt) {
if (iavf_xmit_cleanup(txq))
break;
- nb_tx_to_clean = txq->nb_free - nb_tx_free_last;
- nb_tx_free_last = txq->nb_free;
+ nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+ nb_tx_free_last = txq->nb_tx_free;
}
}
@@ -4356,8 +4356,8 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_free_thresh = txq->free_thresh;
- qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
qinfo->conf.offloads = txq->offloads;
qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
}
@@ -4432,8 +4432,8 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
desc = txq->tx_tail + offset;
/* go to next desc that has the RS bit */
- desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
- txq->rs_thresh;
+ desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+ txq->tx_rs_thresh;
if (desc >= txq->nb_tx_desc) {
desc -= txq->nb_tx_desc;
if (desc >= txq->nb_tx_desc)
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 1a191f2c89..44e2de731c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -277,25 +277,25 @@ struct iavf_rx_queue {
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
- uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
+ rte_iova_t tx_ring_dma; /* Tx ring DMA address */
struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
uint16_t tx_tail; /* current value of tail */
volatile uint8_t *qtx_tail; /* register address of tail */
/* number of used desc since RS bit set */
- uint16_t nb_used;
- uint16_t nb_free;
+ uint16_t nb_tx_used;
+ uint16_t nb_tx_free;
uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t free_thresh;
- uint16_t rs_thresh;
+ uint16_t tx_free_thresh;
+ uint16_t tx_rs_thresh;
uint8_t rel_mbufs_type;
struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
uint16_t port_id;
uint16_t queue_id;
uint64_t offloads;
- uint16_t next_dd; /* next to set RS, for VPMD */
- uint16_t next_rs; /* next to check DD, for VPMD */
+ uint16_t tx_next_dd; /* next to check DD, for VPMD */
+ uint16_t tx_next_rs; /* next to set RS, for VPMD */
uint16_t ipsec_crypto_pkt_md_offset;
uint64_t mbuf_errors;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 28885800e0..42e09a2adf 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1742,18 +1742,19 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs(txq);
- nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
+ nb_commit = nb_pkts;
tx_id = txq->tx_tail;
txdp = &txq->tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -1768,7 +1769,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -1780,12 +1781,12 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -1806,7 +1807,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec_avx2(tx_queue, &tx_pkts[nb_tx],
num, offload);
nb_tx += ret;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index a899309f94..dc1fef24f0 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1854,18 +1854,18 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+ if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
- n = txq->rs_thresh >> txq->use_ctx;
+ n = txq->tx_rs_thresh >> txq->use_ctx;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
txep = (void *)txq->sw_ring;
- txep += (txq->next_dd >> txq->use_ctx) - (n - 1);
+ txep += (txq->tx_next_dd >> txq->use_ctx) - (n - 1);
if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
@@ -1951,12 +1951,12 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
done:
/* buffers were freed, update counters */
- txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
- txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
- if (txq->next_dd >= txq->nb_tx_desc)
- txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
- return txq->rs_thresh;
+ return txq->tx_rs_thresh;
}
static __rte_always_inline void
@@ -2319,19 +2319,20 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs_avx512(txq);
- nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
+ nb_commit = nb_pkts;
tx_id = txq->tx_tail;
txdp = &txq->tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -2346,7 +2347,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -2359,12 +2360,12 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -2386,10 +2387,10 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs_avx512(txq);
- nb_commit = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts << 1);
+ nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1);
nb_commit &= 0xFFFE;
if (unlikely(nb_commit == 0))
return 0;
@@ -2400,7 +2401,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
txep += (tx_id >> 1);
- txq->nb_free = (uint16_t)(txq->nb_free - nb_commit);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_commit);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (n != 0 && nb_commit >= n) {
@@ -2414,7 +2415,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
tx_id = 0;
/* avoid reach the end of ring */
txdp = txq->tx_ring;
@@ -2427,12 +2428,12 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -2452,7 +2453,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec_avx512(tx_queue, &tx_pkts[nb_tx],
num, offload);
nb_tx += ret;
@@ -2480,10 +2481,10 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
- if (!txq->sw_ring || txq->nb_free == max_desc)
+ if (!txq->sw_ring || txq->nb_tx_free == max_desc)
return;
- i = (txq->next_dd - txq->rs_thresh + 1) >> txq->use_ctx;
+ i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx;
while (i != end_desc) {
rte_pktmbuf_free_seg(swr[i].mbuf);
swr[i].mbuf = NULL;
@@ -2517,7 +2518,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->tx_rs_thresh);
num = num >> 1;
ret = iavf_xmit_fixed_burst_vec_avx512_ctx(tx_queue, &tx_pkts[nb_tx],
num, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 2c118cc059..ff24055c34 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -26,17 +26,17 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+ if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
- n = txq->rs_thresh;
+ n = txq->tx_rs_thresh;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
- txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+ txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
if (likely(m != NULL)) {
free[0] = m;
@@ -65,12 +65,12 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
}
/* buffers were freed, update counters */
- txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
- txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
- if (txq->next_dd >= txq->nb_tx_desc)
- txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
- return txq->rs_thresh;
+ return txq->tx_rs_thresh;
}
static inline void
@@ -109,10 +109,10 @@ _iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq)
unsigned i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
- if (!txq->sw_ring || txq->nb_free == max_desc)
+ if (!txq->sw_ring || txq->nb_tx_free == max_desc)
return;
- i = txq->next_dd - txq->rs_thresh + 1;
+ i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
while (i != txq->tx_tail) {
rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
txq->sw_ring[i].mbuf = NULL;
@@ -169,8 +169,8 @@ iavf_tx_vec_queue_default(struct iavf_tx_queue *txq)
if (!txq)
return -1;
- if (txq->rs_thresh < IAVF_VPMD_TX_MAX_BURST ||
- txq->rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF)
+ if (txq->tx_rs_thresh < IAVF_VPMD_TX_MAX_BURST ||
+ txq->tx_rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF)
return -1;
if (txq->offloads & IAVF_TX_NO_VECTOR_FLAGS)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index bc4b8f14c8..ed8455d669 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1374,10 +1374,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
int i;
- if (txq->nb_free < txq->free_thresh)
+ if (txq->nb_tx_free < txq->tx_free_thresh)
iavf_tx_free_bufs(txq);
- nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
return 0;
nb_commit = nb_pkts;
@@ -1386,7 +1386,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txdp = &txq->tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
- txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
@@ -1400,7 +1400,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = (uint16_t)(nb_commit - n);
tx_id = 0;
- txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
txdp = &txq->tx_ring[tx_id];
@@ -1412,12 +1412,12 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
tx_id = (uint16_t)(tx_id + nb_commit);
- if (tx_id > txq->next_rs) {
- txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+ if (tx_id > txq->tx_next_rs) {
+ txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
- txq->next_rs =
- (uint16_t)(txq->next_rs + txq->rs_thresh);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
}
txq->tx_tail = tx_id;
@@ -1441,7 +1441,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ret, num;
/* cross rs_thresh boundary is not allowed */
- num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+ num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
ret = iavf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num);
nb_tx += ret;
nb_pkts -= ret;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 065ab3594c..0646a2f978 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1247,7 +1247,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
/* Virtchnnl configure tx queues by pairs */
if (i < adapter->dev_data->nb_tx_queues) {
vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
- vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+ vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
}
vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
diff --git a/drivers/net/ixgbe/base/ixgbe_osdep.h b/drivers/net/ixgbe/base/ixgbe_osdep.h
index 502f386b56..95dbe2bedd 100644
--- a/drivers/net/ixgbe/base/ixgbe_osdep.h
+++ b/drivers/net/ixgbe/base/ixgbe_osdep.h
@@ -124,7 +124,7 @@ static inline uint32_t ixgbe_read_addr(volatile void* addr)
rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg)
#define IXGBE_PCI_REG_ADDR(hw, reg) \
- ((volatile uint32_t *)((char *)(hw)->hw_addr + (reg)))
+ ((volatile void *)((char *)(hw)->hw_addr + (reg)))
#define IXGBE_PCI_REG_ARRAY_ADDR(hw, reg, index) \
IXGBE_PCI_REG_ADDR((hw), (reg) + ((index) << 2))
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index db4b993ebc..0a80b944f0 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -308,7 +308,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* update tail pointer */
rte_wmb();
- IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
@@ -946,7 +946,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
(unsigned) txq->port_id, (unsigned) txq->queue_id,
(unsigned) tx_id, (unsigned) nb_tx);
- IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, tx_id);
+ IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, tx_id);
txq->tx_tail = tx_id;
return nb_tx;
@@ -2786,11 +2786,11 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
hw->mac.type == ixgbe_mac_X550_vf ||
hw->mac.type == ixgbe_mac_X550EM_x_vf ||
hw->mac.type == ixgbe_mac_X550EM_a_vf)
- txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx));
+ txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx));
else
- txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
+ txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
- txq->tx_ring_phys_addr = tz->iova;
+ txq->tx_ring_dma = tz->iova;
txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
/* Allocate software ring */
@@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
- txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
+ txq->sw_ring, txq->tx_ring, txq->tx_ring_dma);
/* set up vector or scalar TX function as appropriate */
ixgbe_set_tx_function(dev, txq);
@@ -5303,7 +5303,7 @@ ixgbe_dev_tx_init(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- bus_addr = txq->tx_ring_phys_addr;
+ bus_addr = txq->tx_ring_dma;
IXGBE_WRITE_REG(hw, IXGBE_TDBAL(txq->reg_idx),
(uint32_t)(bus_addr & 0x00000000ffffffffULL));
IXGBE_WRITE_REG(hw, IXGBE_TDBAH(txq->reg_idx),
@@ -5887,7 +5887,7 @@ ixgbevf_dev_tx_init(struct rte_eth_dev *dev)
/* Setup the Base and Length of the Tx Descriptor Rings */
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- bus_addr = txq->tx_ring_phys_addr;
+ bus_addr = txq->tx_ring_dma;
IXGBE_WRITE_REG(hw, IXGBE_VFTDBAL(i),
(uint32_t)(bus_addr & 0x00000000ffffffffULL));
IXGBE_WRITE_REG(hw, IXGBE_VFTDBAH(i),
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 1647396419..00e2009b3e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -186,12 +186,12 @@ struct ixgbe_advctx_info {
struct ixgbe_tx_queue {
/** TX ring virtual address. */
volatile union ixgbe_adv_tx_desc *tx_ring;
- uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
+ rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
union {
struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
};
- volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
+ volatile uint8_t *qtx_tail; /**< Address of TDT register. */
uint16_t nb_tx_desc; /**< number of TX descriptors. */
uint16_t tx_tail; /**< current value of TDT reg. */
/**< Start freeing TX buffers if there are less free descriptors than
@@ -218,7 +218,7 @@ struct ixgbe_tx_queue {
/** Hardware context0 history. */
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
const struct ixgbe_txq_ops *ops; /**< txq ops */
- uint8_t tx_deferred_start; /**< not in global dev start. */
+ bool tx_deferred_start; /**< not in global dev start. */
#ifdef RTE_LIB_SECURITY
uint8_t using_ipsec;
/**< indicates that IPsec TX feature is in use */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 02b53c008e..871c1a7cd2 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -628,7 +628,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_tail = tx_id;
- IXGBE_PCI_REG_WRITE(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index c8b5377c9f..37f2079519 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -751,7 +751,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_tail = tx_id;
- IXGBE_PCI_REG_WC_WRITE(txq->tdt_reg_addr, txq->tx_tail);
+ IXGBE_PCI_REG_WC_WRITE(txq->qtx_tail, txq->tx_tail);
return nb_pkts;
}
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v4 05/24] drivers/net: add prefix for driver-specific structs
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (3 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 04/24] drivers/net: align Tx queue struct field names Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 06/24] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
` (18 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin,
Anatoly Burakov
In preparation for merging the Tx structs for multiple drivers into a
single struct, rename the driver-specific pointers in each struct to
have a driver prefix, to avoid conflicts.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_fdir.c | 6 +--
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 30 ++++++------
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 8 ++--
drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +-
drivers/net/i40e/i40e_rxtx_vec_neon.c | 6 +--
drivers/net/i40e/i40e_rxtx_vec_sse.c | 6 +--
drivers/net/iavf/iavf_rxtx.c | 24 +++++-----
drivers/net/iavf/iavf_rxtx.h | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 6 +--
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++---
drivers/net/iavf/iavf_rxtx_vec_common.h | 2 +-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 6 +--
drivers/net/ice/ice_dcf_ethdev.c | 4 +-
drivers/net/ice/ice_rxtx.c | 48 +++++++++----------
drivers/net/ice/ice_rxtx.h | 4 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 6 +--
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 ++--
drivers/net/ice/ice_rxtx_vec_common.h | 4 +-
drivers/net/ice/ice_rxtx_vec_sse.c | 6 +--
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 22 ++++-----
drivers/net/ixgbe/ixgbe_rxtx.h | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 6 +--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 6 +--
29 files changed, 128 insertions(+), 128 deletions(-)
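Note: to keep the eventual merged struct readable, the fields whose
types remain driver-specific gain the driver name as a prefix. A sketch
of the resulting accesses, matching the diff below (txq and vq are
assumed to point at the i40e and iavf Tx queue structs respectively):

	/* i40e: descriptor ring and VSI now carry the driver prefix */
	volatile struct i40e_tx_desc *txdp = &txq->i40e_tx_ring[tx_id];
	struct i40e_vsi *vsi = txq->i40e_vsi;

	/* iavf: likewise */
	volatile struct iavf_tx_desc *vtxdp = &vq->iavf_tx_ring[tx_id];
	struct iavf_vsi *vvsi = vq->iavf_vsi;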
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index 47f79ecf11..c600167634 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -1383,7 +1383,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev)
volatile struct i40e_tx_desc *tmp_txdp;
tmp_tail = txq->tx_tail;
- tmp_txdp = &txq->tx_ring[tmp_tail + 1];
+ tmp_txdp = &txq->i40e_tx_ring[tmp_tail + 1];
do {
if ((tmp_txdp->cmd_type_offset_bsz &
@@ -1640,7 +1640,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
PMD_DRV_LOG(INFO, "filling filter programming descriptor.");
fdirdp = (volatile struct i40e_filter_program_desc *)
- (&txq->tx_ring[txq->tx_tail]);
+ (&txq->i40e_tx_ring[txq->tx_tail]);
fdirdp->qindex_flex_ptype_vsi =
rte_cpu_to_le_32((fdir_action->rx_queue <<
@@ -1710,7 +1710,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
fdirdp->fd_id = rte_cpu_to_le_32(filter->soft_id);
PMD_DRV_LOG(INFO, "filling transmit descriptor.");
- txdp = &txq->tx_ring[txq->tx_tail + 1];
+ txdp = &txq->i40e_tx_ring[txq->tx_tail + 1];
txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr[txq->tx_tail >> 1]);
td_cmd = I40E_TX_DESC_CMD_EOP |
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 260d238ce4..8679e5c1fd 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -75,7 +75,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
return 0;
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b0bb20fe9a..34ef931859 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -379,7 +379,7 @@ static inline int
i40e_xmit_cleanup(struct i40e_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile struct i40e_tx_desc *txd = txq->tx_ring;
+ volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -1103,7 +1103,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq = tx_queue;
sw_ring = txq->sw_ring;
- txr = txq->tx_ring;
+ txr = txq->i40e_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
@@ -1338,7 +1338,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
const uint16_t k = RTE_ALIGN_FLOOR(tx_rs_thresh, RTE_I40E_TX_MAX_FREE_BUF_SZ);
const uint16_t m = tx_rs_thresh % RTE_I40E_TX_MAX_FREE_BUF_SZ;
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -1417,7 +1417,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
+ volatile struct i40e_tx_desc *txdp = &txq->i40e_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
@@ -1445,7 +1445,7 @@ tx_xmit_pkts(struct i40e_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- volatile struct i40e_tx_desc *txr = txq->tx_ring;
+ volatile struct i40e_tx_desc *txr = txq->i40e_tx_ring;
uint16_t n = 0;
/**
@@ -1556,7 +1556,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
bool pkt_error = false;
const char *reason = NULL;
uint16_t good_pkts = nb_pkts;
- struct i40e_adapter *adapter = txq->vsi->adapter;
+ struct i40e_adapter *adapter = txq->i40e_vsi->adapter;
for (idx = 0; idx < nb_pkts; idx++) {
mb = tx_pkts[idx];
@@ -2329,7 +2329,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->i40e_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_le_to_cpu_64(I40E_TXD_QW1_DTYPE_MASK);
expect = rte_cpu_to_le_64(
I40E_TX_DESC_DTYPE_DESC_DONE << I40E_TXD_QW1_DTYPE_SHIFT);
@@ -2527,7 +2527,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct i40e_tx_desc) * I40E_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "i40e_tx_ring", queue_idx,
ring_size, I40E_RING_BASE_ALIGN, socket_id);
if (!tz) {
i40e_tx_queue_release(txq);
@@ -2546,11 +2546,11 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->reg_idx = reg_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
- txq->vsi = vsi;
+ txq->i40e_vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
+ txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr;
/* Allocate software ring */
txq->sw_ring =
@@ -2885,11 +2885,11 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct i40e_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->i40e_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile struct i40e_tx_desc *txd = &txq->tx_ring[i];
+ volatile struct i40e_tx_desc *txd = &txq->i40e_tx_ring[i];
txd->cmd_type_offset_bsz =
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
@@ -2914,7 +2914,7 @@ int
i40e_tx_queue_init(struct i40e_tx_queue *txq)
{
enum i40e_status_code err = I40E_SUCCESS;
- struct i40e_vsi *vsi = txq->vsi;
+ struct i40e_vsi *vsi = txq->i40e_vsi;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t pf_q = txq->reg_idx;
struct i40e_hmc_obj_txq tx_ctx;
@@ -3207,10 +3207,10 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
txq->nb_tx_desc = I40E_FDIR_NUM_TX_DESC;
txq->queue_id = I40E_FDIR_QUEUE_ID;
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
- txq->vsi = pf->fdir.fdir_vsi;
+ txq->i40e_vsi = pf->fdir.fdir_vsi;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
+ txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr;
/*
* don't need to allocate software ring and reset for the fdir
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index f420c98687..8315ee2f59 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -130,7 +130,7 @@ struct i40e_rx_queue {
struct i40e_tx_queue {
uint16_t nb_tx_desc; /**< number of TX descriptors */
rte_iova_t tx_ring_dma; /**< TX ring DMA address */
- volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
+ volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */
struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
uint16_t tx_tail; /**< current value of tail register */
volatile uint8_t *qtx_tail; /**< register address of tail */
@@ -150,7 +150,7 @@ struct i40e_tx_queue {
uint16_t port_id; /**< Device port identifier. */
uint16_t queue_id; /**< TX queue index. */
uint16_t reg_idx;
- struct i40e_vsi *vsi; /**< the VSI this queue belongs to */
+ struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */
uint16_t tx_next_dd;
uint16_t tx_next_rs;
bool q_set; /**< indicate if tx queue has been configured */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 80f07a3e10..bf0e9ebd71 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -568,7 +568,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -588,7 +588,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -598,7 +598,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index b26bae4757..5042e348db 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -758,7 +758,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -779,7 +779,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -789,7 +789,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 8b8a16daa8..04fbe3b2e3 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -764,7 +764,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -948,7 +948,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -970,7 +970,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->i40e_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -980,7 +980,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 325e99c1a4..e81f958361 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -26,7 +26,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 26bc345a0a..05191e4884 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -695,7 +695,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -715,7 +715,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -725,7 +725,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index ebc32b0d27..d81b553842 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -714,7 +714,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -734,7 +734,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->i40e_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -744,7 +744,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
I40E_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index adaaeb4625..6eda91e76b 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -296,11 +296,11 @@ reset_tx_queue(struct iavf_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct iavf_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->iavf_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i].cmd_type_offset_bsz =
+ txq->iavf_tx_ring[i].cmd_type_offset_bsz =
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
txe[i].mbuf = NULL;
txe[i].last_id = i;
@@ -851,7 +851,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
- txq->vsi = vsi;
+ txq->iavf_vsi = vsi;
if (iavf_ipsec_crypto_supported(adapter))
txq->ipsec_crypto_pkt_md_offset =
@@ -872,7 +872,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct iavf_tx_desc) * IAVF_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, IAVF_DMA_MEM_ALIGN);
- mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ mz = rte_eth_dma_zone_reserve(dev, "iavf_tx_ring", queue_idx,
ring_size, IAVF_RING_BASE_ALIGN,
socket_id);
if (!mz) {
@@ -882,7 +882,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
txq->tx_ring_dma = mz->iova;
- txq->tx_ring = (struct iavf_tx_desc *)mz->addr;
+ txq->iavf_tx_ring = (struct iavf_tx_desc *)mz->addr;
txq->mz = mz;
reset_tx_queue(txq);
@@ -2385,7 +2385,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
uint16_t desc_to_clean_to;
uint16_t nb_tx_to_clean;
- volatile struct iavf_tx_desc *txd = txq->tx_ring;
+ volatile struct iavf_tx_desc *txd = txq->iavf_tx_ring;
desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
if (desc_to_clean_to >= nb_tx_desc)
@@ -2796,7 +2796,7 @@ uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct iavf_tx_queue *txq = tx_queue;
- volatile struct iavf_tx_desc *txr = txq->tx_ring;
+ volatile struct iavf_tx_desc *txr = txq->iavf_tx_ring;
struct ci_tx_entry *txe_ring = txq->sw_ring;
struct ci_tx_entry *txe, *txn;
struct rte_mbuf *mb, *mb_seg;
@@ -3803,10 +3803,10 @@ iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
struct iavf_tx_queue *txq = tx_queue;
enum iavf_tx_burst_type tx_burst_type;
- if (!txq->vsi || txq->vsi->adapter->no_poll)
+ if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll)
return 0;
- tx_burst_type = txq->vsi->adapter->tx_burst_type;
+ tx_burst_type = txq->iavf_vsi->adapter->tx_burst_type;
return iavf_tx_pkt_burst_ops[tx_burst_type](tx_queue,
tx_pkts, nb_pkts);
@@ -3824,9 +3824,9 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
const char *reason = NULL;
bool pkt_error = false;
struct iavf_tx_queue *txq = tx_queue;
- struct iavf_adapter *adapter = txq->vsi->adapter;
+ struct iavf_adapter *adapter = txq->iavf_vsi->adapter;
enum iavf_tx_burst_type tx_burst_type =
- txq->vsi->adapter->tx_burst_type;
+ txq->iavf_vsi->adapter->tx_burst_type;
for (idx = 0; idx < nb_pkts; idx++) {
mb = tx_pkts[idx];
@@ -4440,7 +4440,7 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->iavf_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_le_to_cpu_64(IAVF_TXD_QW1_DTYPE_MASK);
expect = rte_cpu_to_le_64(
IAVF_TX_DESC_DTYPE_DESC_DONE << IAVF_TXD_QW1_DTYPE_SHIFT);
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 44e2de731c..cc1eaaf54c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -276,7 +276,7 @@ struct iavf_rx_queue {
/* Structure associated with each TX queue. */
struct iavf_tx_queue {
const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
+ volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */
rte_iova_t tx_ring_dma; /* Tx ring DMA address */
struct ci_tx_entry *sw_ring; /* address array of SW ring */
uint16_t nb_tx_desc; /* ring length */
@@ -289,7 +289,7 @@ struct iavf_tx_queue {
uint16_t tx_free_thresh;
uint16_t tx_rs_thresh;
uint8_t rel_mbufs_type;
- struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
+ struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */
uint16_t port_id;
uint16_t queue_id;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 42e09a2adf..f33ceceee1 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1751,7 +1751,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -1772,7 +1772,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -1782,7 +1782,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index dc1fef24f0..97420a75fd 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1854,7 +1854,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -2328,7 +2328,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -2350,7 +2350,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
}
@@ -2361,7 +2361,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
@@ -2397,7 +2397,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = nb_commit >> 1;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += (tx_id >> 1);
@@ -2418,7 +2418,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
tx_id = 0;
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->iavf_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -2429,7 +2429,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index ff24055c34..6305c8cdd6 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -26,7 +26,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq)
struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
return 0;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ed8455d669..64c3bf0eaa 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1383,7 +1383,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_commit = nb_pkts;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -1403,7 +1403,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->iavf_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -1413,7 +1413,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
IAVF_TXD_QW1_CMD_SHIFT);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4b98e4066b..4ffd1f5567 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -401,11 +401,11 @@ reset_tx_queue(struct ice_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->ice_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i].cmd_type_offset_bsz =
+ txq->ice_tx_ring[i].cmd_type_offset_bsz =
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
txe[i].mbuf = NULL;
txe[i].last_id = i;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index d584086a36..5ec92f6d0c 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -776,7 +776,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
if (!txq_elem)
return -ENOMEM;
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
hw = ICE_VSI_TO_HW(vsi);
pf = ICE_VSI_TO_PF(vsi);
@@ -966,7 +966,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
if (!txq_elem)
return -ENOMEM;
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
hw = ICE_VSI_TO_HW(vsi);
memset(&tx_ctx, 0, sizeof(tx_ctx));
@@ -1039,11 +1039,11 @@ ice_reset_tx_queue(struct ice_tx_queue *txq)
txe = txq->sw_ring;
size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
for (i = 0; i < size; i++)
- ((volatile char *)txq->tx_ring)[i] = 0;
+ ((volatile char *)txq->ice_tx_ring)[i] = 0;
prev = (uint16_t)(txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+ volatile struct ice_tx_desc *txd = &txq->ice_tx_ring[i];
txd->cmd_type_offset_bsz =
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
@@ -1153,7 +1153,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
PMD_DRV_LOG(INFO, "TX queue %u not started", tx_queue_id);
return 0;
}
- vsi = txq->vsi;
+ vsi = txq->ice_vsi;
q_ids[0] = txq->reg_idx;
q_teids[0] = txq->q_teid;
@@ -1479,7 +1479,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate TX hardware ring descriptors. */
ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "ice_tx_ring", queue_idx,
ring_size, ICE_RING_BASE_ALIGN,
socket_id);
if (!tz) {
@@ -1500,11 +1500,11 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
txq->reg_idx = vsi->base_queue + queue_idx;
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
- txq->vsi = vsi;
+ txq->ice_vsi = vsi;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = tz->addr;
+ txq->ice_tx_ring = tz->addr;
/* Allocate software ring */
txq->sw_ring =
@@ -2372,7 +2372,7 @@ ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ status = &txq->ice_tx_ring[desc].cmd_type_offset_bsz;
mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
ICE_TXD_QW1_DTYPE_S);
@@ -2452,10 +2452,10 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
txq->nb_tx_desc = ICE_FDIR_NUM_TX_DESC;
txq->queue_id = ICE_FDIR_QUEUE_ID;
txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
- txq->vsi = pf->fdir.fdir_vsi;
+ txq->ice_vsi = pf->fdir.fdir_vsi;
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+ txq->ice_tx_ring = (struct ice_tx_desc *)tz->addr;
/*
* don't need to allocate software ring and reset for the fdir
* program queue just set the queue has been configured.
@@ -2838,7 +2838,7 @@ static inline int
ice_xmit_cleanup(struct ice_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile struct ice_tx_desc *txd = txq->tx_ring;
+ volatile struct ice_tx_desc *txd = txq->ice_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -2959,7 +2959,7 @@ uint16_t
ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct ice_tx_queue *txq;
- volatile struct ice_tx_desc *tx_ring;
+ volatile struct ice_tx_desc *ice_tx_ring;
volatile struct ice_tx_desc *txd;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
@@ -2981,7 +2981,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq = tx_queue;
sw_ring = txq->sw_ring;
- tx_ring = txq->tx_ring;
+ ice_tx_ring = txq->ice_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
@@ -3064,7 +3064,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Setup TX context descriptor if required */
volatile struct ice_tx_ctx_desc *ctx_txd =
(volatile struct ice_tx_ctx_desc *)
- &tx_ring[tx_id];
+ &ice_tx_ring[tx_id];
uint16_t cd_l2tag2 = 0;
uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
@@ -3082,7 +3082,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
cd_type_cmd_tso_mss |=
((uint64_t)ICE_TX_CTX_DESC_TSYN <<
ICE_TXD_CTX_QW1_CMD_S) |
- (((uint64_t)txq->vsi->adapter->ptp_tx_index <<
+ (((uint64_t)txq->ice_vsi->adapter->ptp_tx_index <<
ICE_TXD_CTX_QW1_TSYN_S) & ICE_TXD_CTX_QW1_TSYN_M);
ctx_txd->tunneling_params =
@@ -3106,7 +3106,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
m_seg = tx_pkt;
do {
- txd = &tx_ring[tx_id];
+ txd = &ice_tx_ring[tx_id];
txn = &sw_ring[txe->next_id];
if (txe->mbuf)
@@ -3134,7 +3134,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txe->last_id = tx_last;
tx_id = txe->next_id;
txe = txn;
- txd = &tx_ring[tx_id];
+ txd = &ice_tx_ring[tx_id];
txn = &sw_ring[txe->next_id];
}
@@ -3187,7 +3187,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
struct ci_tx_entry *txep;
uint16_t i;
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -3360,7 +3360,7 @@ static inline void
ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+ volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
@@ -3393,7 +3393,7 @@ tx_xmit_pkts(struct ice_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- volatile struct ice_tx_desc *txr = txq->tx_ring;
+ volatile struct ice_tx_desc *txr = txq->ice_tx_ring;
uint16_t n = 0;
/**
@@ -3722,7 +3722,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
bool pkt_error = false;
uint16_t good_pkts = nb_pkts;
const char *reason = NULL;
- struct ice_adapter *adapter = txq->vsi->adapter;
+ struct ice_adapter *adapter = txq->ice_vsi->adapter;
uint64_t ol_flags;
for (idx = 0; idx < nb_pkts; idx++) {
@@ -4701,11 +4701,11 @@ ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc)
uint16_t i;
fdirdp = (volatile struct ice_fltr_desc *)
- (&txq->tx_ring[txq->tx_tail]);
+ (&txq->ice_tx_ring[txq->tx_tail]);
fdirdp->qidx_compq_space_stat = fdir_desc->qidx_compq_space_stat;
fdirdp->dtype_cmd_vsi_fdid = fdir_desc->dtype_cmd_vsi_fdid;
- txdp = &txq->tx_ring[txq->tx_tail + 1];
+ txdp = &txq->ice_tx_ring[txq->tx_tail + 1];
txdp->buf_addr = rte_cpu_to_le_64(pf->fdir.dma_addr);
td_cmd = ICE_TX_DESC_CMD_EOP |
ICE_TX_DESC_CMD_RS |
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 8d1a1a8676..3257f449f5 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -148,7 +148,7 @@ struct ice_rx_queue {
struct ice_tx_queue {
uint16_t nb_tx_desc; /* number of TX descriptors */
rte_iova_t tx_ring_dma; /* TX ring DMA address */
- volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+ volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
uint16_t tx_tail; /* current value of tail register */
volatile uint8_t *qtx_tail; /* register address of tail */
@@ -171,7 +171,7 @@ struct ice_tx_queue {
uint32_t q_teid; /* TX schedule node id. */
uint16_t reg_idx;
uint64_t offloads;
- struct ice_vsi *vsi; /* the VSI this queue belongs to */
+ struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */
uint16_t tx_next_dd;
uint16_t tx_next_rs;
uint64_t mbuf_errors;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 336697e72d..dde07ac99e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -874,7 +874,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -895,7 +895,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -905,7 +905,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 6b6aa3f1fe..e4d0270176 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -869,7 +869,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -1071,7 +1071,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = (void *)txq->sw_ring;
txep += tx_id;
@@ -1093,7 +1093,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = txq->tx_ring;
+ txdp = txq->ice_tx_ring;
txep = (void *)txq->sw_ring;
}
@@ -1103,7 +1103,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 32e4541267..7b865b53ad 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -22,7 +22,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
/* check DD bits on threshold descriptor */
- if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
return 0;
@@ -121,7 +121,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
#ifdef __AVX512VL__
- struct rte_eth_dev *dev = &rte_eth_devices[txq->vsi->adapter->pf.dev_data->port_id];
+ struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id];
if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index debdd8f6a2..364207e8a8 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -717,7 +717,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -737,7 +737,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ice_tx_ring[tx_id];
txep = &txq->sw_ring[tx_id];
}
@@ -747,7 +747,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
+ txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |=
rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
ICE_TXD_QW1_CMD_S);
txq->tx_next_rs =
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index 2241726ad8..a878db3150 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -72,7 +72,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
return 0;
/* check DD bits on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & IXGBE_ADVTXD_STAT_DD))
return 0;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0a80b944f0..f7ddbba1b6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -106,7 +106,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
/* check DD bit on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD)))
return 0;
@@ -198,7 +198,7 @@ static inline void
ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
- volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
+ volatile union ixgbe_adv_tx_desc *txdp = &txq->ixgbe_tx_ring[txq->tx_tail];
struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
const int N_PER_LOOP = 4;
const int N_PER_LOOP_MASK = N_PER_LOOP-1;
@@ -232,7 +232,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
- volatile union ixgbe_adv_tx_desc *tx_r = txq->tx_ring;
+ volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring;
uint16_t n = 0;
/*
@@ -564,7 +564,7 @@ static inline int
ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
- volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring;
+ volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
uint16_t nb_tx_desc = txq->nb_tx_desc;
uint16_t desc_to_clean_to;
@@ -652,7 +652,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.data[1] = 0;
txq = tx_queue;
sw_ring = txq->sw_ring;
- txr = txq->tx_ring;
+ txr = txq->ixgbe_tx_ring;
tx_id = txq->tx_tail;
txe = &sw_ring[tx_id];
txp = NULL;
@@ -2495,13 +2495,13 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
/* Zero out HW ring memory */
for (i = 0; i < txq->nb_tx_desc; i++) {
- txq->tx_ring[i] = zeroed_desc;
+ txq->ixgbe_tx_ring[i] = zeroed_desc;
}
/* Initialize SW ring entries */
prev = (uint16_t) (txq->nb_tx_desc - 1);
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i];
+ volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i];
txd->wb.status = rte_cpu_to_le_32(IXGBE_TXD_STAT_DD);
txe[i].mbuf = NULL;
@@ -2751,7 +2751,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
* handle the maximum ring size is allocated in order to allow for
* resizing in later calls to the queue setup function.
*/
- tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ tz = rte_eth_dma_zone_reserve(dev, "ixgbe_tx_ring", queue_idx,
sizeof(union ixgbe_adv_tx_desc) * IXGBE_MAX_RING_DESC,
IXGBE_ALIGN, socket_id);
if (tz == NULL) {
@@ -2791,7 +2791,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
txq->tx_ring_dma = tz->iova;
- txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
+ txq->ixgbe_tx_ring = (union ixgbe_adv_tx_desc *)tz->addr;
/* Allocate software ring */
txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
@@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
- txq->sw_ring, txq->tx_ring, txq->tx_ring_dma);
+ txq->sw_ring, txq->ixgbe_tx_ring, txq->tx_ring_dma);
/* set up vector or scalar TX function as appropriate */
ixgbe_set_tx_function(dev, txq);
@@ -3328,7 +3328,7 @@ ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
desc -= txq->nb_tx_desc;
}
- status = &txq->tx_ring[desc].wb.status;
+ status = &txq->ixgbe_tx_ring[desc].wb.status;
if (*status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD))
return RTE_ETH_TX_DESC_DONE;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 00e2009b3e..f6bae37cf3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -185,7 +185,7 @@ struct ixgbe_advctx_info {
*/
struct ixgbe_tx_queue {
/** TX ring virtual address. */
- volatile union ixgbe_adv_tx_desc *tx_ring;
+ volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
union {
struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index e9592c0d08..cc51bf6eed 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -22,7 +22,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ];
/* check DD bit on threshold descriptor */
- status = txq->tx_ring[txq->tx_next_dd].wb.status;
+ status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status;
if (!(status & IXGBE_ADVTXD_STAT_DD))
return 0;
@@ -154,11 +154,11 @@ _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
/* Zero out HW ring memory */
for (i = 0; i < txq->nb_tx_desc; i++)
- txq->tx_ring[i] = zeroed_desc;
+ txq->ixgbe_tx_ring[i] = zeroed_desc;
/* Initialize SW ring entries */
for (i = 0; i < txq->nb_tx_desc; i++) {
- volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i];
+ volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i];
txd->wb.status = IXGBE_TXD_STAT_DD;
txe[i].mbuf = NULL;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 871c1a7cd2..06be7ec82a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -590,7 +590,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -610,7 +610,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
}
@@ -620,7 +620,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |=
+ txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |=
rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
txq->tx_next_rs = (uint16_t)(txq->tx_next_rs +
txq->tx_rs_thresh);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 37f2079519..a21a57bd55 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -712,7 +712,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return 0;
tx_id = txq->tx_tail;
- txdp = &txq->tx_ring[tx_id];
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -733,7 +733,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
/* avoid reach the end of ring */
- txdp = &(txq->tx_ring[tx_id]);
+ txdp = &txq->ixgbe_tx_ring[tx_id];
txep = &txq->sw_ring_v[tx_id];
}
@@ -743,7 +743,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = (uint16_t)(tx_id + nb_commit);
if (tx_id > txq->tx_next_rs) {
- txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |=
+ txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |=
rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
txq->tx_next_rs = (uint16_t)(txq->tx_next_rs +
txq->tx_rs_thresh);
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v4 06/24] net/_common_intel: merge ice and i40e Tx queue struct
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (4 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 05/24] drivers/net: add prefix for driver-specific structs Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 07/24] net/iavf: use common Tx queue structure Bruce Richardson
` (17 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage, Anatoly Burakov
The queue structures of the i40e and ice drivers are virtually identical,
so merge them into a common struct. This should make it easier to merge
functions across the drivers in future, using that common struct.
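As a rough illustration of the idea (a minimal standalone sketch, not
code from this patch; all demo_* names are hypothetical stand-ins for
the real ice/i40e types), an anonymous union lets one field in the
common queue struct carry each driver's descriptor ring pointer, so
shared helpers only ever touch the common fields:

	#include <stdint.h>
	#include <stdio.h>

	/* hypothetical stand-ins for struct ice_tx_desc / i40e_tx_desc */
	struct demo_ice_tx_desc  { uint64_t buf_addr, cmd_type_offset_bsz; };
	struct demo_i40e_tx_desc { uint64_t buffer_addr, cmd_type_offset_bsz; };

	/* common queue struct: the anonymous union means one slot serves
	 * every driver, while counters/thresholds stay driver-agnostic
	 */
	struct demo_ci_tx_queue {
		union {
			volatile struct demo_ice_tx_desc *ice_tx_ring;
			volatile struct demo_i40e_tx_desc *i40e_tx_ring;
		};
		uint16_t nb_tx_desc;
		uint16_t nb_tx_free;
	};

	/* shared helper: compiles once and works for any driver, since
	 * it reads only the common fields
	 */
	static uint16_t
	demo_free_count(const struct demo_ci_tx_queue *txq)
	{
		return txq->nb_tx_free;
	}

	int
	main(void)
	{
		static struct demo_ice_tx_desc ring[8];
		struct demo_ci_tx_queue txq = {
			.ice_tx_ring = ring, .nb_tx_desc = 8, .nb_tx_free = 8,
		};

		/* a driver fast path selects its own union member */
		printf("ice ring %p: %u free of %u\n",
		       (void *)(uintptr_t)txq.ice_tx_ring,
		       (unsigned)demo_free_count(&txq),
		       (unsigned)txq.nb_tx_desc);
		return 0;
	}

The cost of this approach is that driver-only fields still need a home;
the trailing unions in the struct below provide one while keeping a
single struct layout, so the same sizeof and field offsets hold for all
drivers sharing the common code.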
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 55 +++++++++++++++++
drivers/net/i40e/i40e_ethdev.c | 4 +-
drivers/net/i40e/i40e_ethdev.h | 4 +-
drivers/net/i40e/i40e_fdir.c | 4 +-
.../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
drivers/net/i40e/i40e_rxtx.c | 58 +++++++++---------
drivers/net/i40e/i40e_rxtx.h | 50 ++--------------
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +-
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +-
drivers/net/ice/ice_dcf.c | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 10 ++--
drivers/net/ice/ice_diagnose.c | 2 +-
drivers/net/ice/ice_ethdev.c | 2 +-
drivers/net/ice/ice_ethdev.h | 4 +-
drivers/net/ice/ice_rxtx.c | 60 +++++++++----------
drivers/net/ice/ice_rxtx.h | 41 +------------
drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +--
drivers/net/ice/ice_rxtx_vec_common.h | 8 +--
drivers/net/ice/ice_rxtx_vec_sse.c | 6 +-
24 files changed, 165 insertions(+), 185 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 5397007411..c965f5ee6c 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -8,6 +8,9 @@
#include <stdint.h>
#include <rte_mbuf.h>
+/* forward declaration of the common intel (ci) queue structure */
+struct ci_tx_queue;
+
/**
* Structure associated with each descriptor of the TX ring of a TX queue.
*/
@@ -24,6 +27,58 @@ struct ci_tx_entry_vec {
struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
};
+typedef void (*ice_tx_release_mbufs_t)(struct ci_tx_queue *txq);
+
+struct ci_tx_queue {
+ union { /* TX ring virtual address */
+ volatile struct ice_tx_desc *ice_tx_ring;
+ volatile struct i40e_tx_desc *i40e_tx_ring;
+ };
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ rte_iova_t tx_ring_dma; /* TX ring DMA address */
+ uint16_t nb_tx_desc; /* number of TX descriptors */
+ uint16_t tx_tail; /* current value of tail register */
+ uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+ /* index to last TX descriptor to have been cleaned */
+ uint16_t last_desc_cleaned;
+ /* Total number of TX descriptors ready to be allocated. */
+ uint16_t nb_tx_free;
+ /* Start freeing TX buffers if there are less free descriptors than
+ * this value.
+ */
+ uint16_t tx_free_thresh;
+ /* Number of TX descriptors to use before RS bit is set. */
+ uint16_t tx_rs_thresh;
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
+ uint16_t port_id; /* Device port identifier. */
+ uint16_t queue_id; /* TX queue index. */
+ uint16_t reg_idx;
+ uint64_t offloads;
+ uint16_t tx_next_dd;
+ uint16_t tx_next_rs;
+ uint64_t mbuf_errors;
+ bool tx_deferred_start; /* don't start this queue in dev start */
+ bool q_set; /* indicate if tx queue has been configured */
+ union { /* the VSI this queue belongs to */
+ struct ice_vsi *ice_vsi;
+ struct i40e_vsi *i40e_vsi;
+ };
+ const struct rte_memzone *mz;
+
+ union {
+ struct { /* ICE driver specific values */
+ ice_tx_release_mbufs_t tx_rel_mbufs;
+ uint32_t q_teid; /* TX schedule node id. */
+ };
+ struct { /* I40E driver specific values */
+ uint8_t dcb_tc;
+ };
+ };
+};
+
static __rte_always_inline void
ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 30dcdc68a8..bf5560ccc8 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3685,7 +3685,7 @@ i40e_dev_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct i40e_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
@@ -6585,7 +6585,7 @@ i40e_dev_tx_init(struct i40e_pf *pf)
struct rte_eth_dev_data *data = pf->dev_data;
uint16_t i;
uint32_t ret = I40E_SUCCESS;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (i = 0; i < data->nb_tx_queues; i++) {
txq = data->tx_queues[i];
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 98213948b4..d351193ed9 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -334,7 +334,7 @@ struct i40e_vsi_list {
};
struct i40e_rx_queue;
-struct i40e_tx_queue;
+struct ci_tx_queue;
/* Bandwidth limit information */
struct i40e_bw_info {
@@ -738,7 +738,7 @@ TAILQ_HEAD(i40e_fdir_filter_list, i40e_fdir_filter);
struct i40e_fdir_info {
struct i40e_vsi *fdir_vsi; /* pointer to fdir VSI structure */
uint16_t match_counter_index; /* Statistic counter index used for fdir*/
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct i40e_rx_queue *rxq;
void *prg_pkt[I40E_FDIR_PRG_PKT_CNT]; /* memory for fdir program packet */
uint64_t dma_addr[I40E_FDIR_PRG_PKT_CNT]; /* physic address of packet memory*/
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index c600167634..349627a2ed 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -1372,7 +1372,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_fdir_info *fdir_info = &pf->fdir;
- struct i40e_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
/* no available buffer
* search for more available buffers from the current
@@ -1628,7 +1628,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
const struct i40e_fdir_filter_conf *filter,
bool add, bool wait_status)
{
- struct i40e_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
struct i40e_rx_queue *rxq = pf->fdir.rxq;
const struct i40e_fdir_action *fdir_action = &filter->action;
volatile struct i40e_tx_desc *txdp;
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 8679e5c1fd..5a65c80d90 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -55,7 +55,7 @@ uint16_t
i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 34ef931859..305bc53480 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -376,7 +376,7 @@ i40e_build_ctob(uint32_t td_cmd,
}
static inline int
-i40e_xmit_cleanup(struct i40e_tx_queue *txq)
+i40e_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring;
@@ -1080,7 +1080,7 @@ i40e_calc_pkt_desc(struct rte_mbuf *tx_pkt)
uint16_t
i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
volatile struct i40e_tx_desc *txd;
@@ -1329,7 +1329,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
static __rte_always_inline int
-i40e_tx_free_bufs(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint16_t tx_rs_thresh = txq->tx_rs_thresh;
@@ -1413,7 +1413,7 @@ tx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts)
/* Fill hardware descriptor ring with mbuf data */
static inline void
-i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
+i40e_tx_fill_hw_ring(struct ci_tx_queue *txq,
struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
@@ -1441,7 +1441,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
}
static inline uint16_t
-tx_xmit_pkts(struct i40e_tx_queue *txq,
+tx_xmit_pkts(struct ci_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
@@ -1504,14 +1504,14 @@ i40e_xmit_pkts_simple(void *tx_queue,
uint16_t nb_tx = 0;
if (likely(nb_pkts <= I40E_TX_MAX_BURST))
- return tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ return tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
tx_pkts, nb_pkts);
while (nb_pkts) {
uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
I40E_TX_MAX_BURST);
- ret = tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ ret = tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
&tx_pkts[nb_tx], num);
nb_tx = (uint16_t)(nb_tx + ret);
nb_pkts = (uint16_t)(nb_pkts - ret);
@@ -1527,7 +1527,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1549,7 +1549,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
static uint16_t
i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
uint16_t idx;
uint64_t ol_flags;
struct rte_mbuf *mb;
@@ -1611,7 +1611,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
pkt_error = true;
break;
}
- if (mb->nb_segs > ((struct i40e_tx_queue *)tx_queue)->nb_tx_desc) {
+ if (mb->nb_segs > ((struct ci_tx_queue *)tx_queue)->nb_tx_desc) {
PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
pkt_error = true;
break;
@@ -1873,7 +1873,7 @@ int
i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
int err;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1907,7 +1907,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int
i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -2311,7 +2311,7 @@ i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct i40e_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
@@ -2341,7 +2341,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
static int
i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
- struct i40e_tx_queue *txq)
+ struct ci_tx_queue *txq)
{
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -2394,7 +2394,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
{
struct i40e_vsi *vsi;
struct i40e_pf *pf = NULL;
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -2515,7 +2515,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("i40e tx queue",
- sizeof(struct i40e_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -2600,7 +2600,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
void
i40e_tx_queue_release(void *txq)
{
- struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
@@ -2705,7 +2705,7 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
}
void
-i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
+i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
struct rte_eth_dev *dev;
uint16_t i;
@@ -2765,7 +2765,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
}
static int
-i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
+i40e_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -2824,7 +2824,7 @@ i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
}
static int
-i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
+i40e_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -2848,7 +2848,7 @@ i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
}
static int
-i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
+i40e_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -2856,7 +2856,7 @@ i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
int
i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -2872,7 +2872,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
}
void
-i40e_reset_tx_queue(struct i40e_tx_queue *txq)
+i40e_reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint16_t i, prev, size;
@@ -2911,7 +2911,7 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq)
/* Init the TX queue in hardware */
int
-i40e_tx_queue_init(struct i40e_tx_queue *txq)
+i40e_tx_queue_init(struct ci_tx_queue *txq)
{
enum i40e_status_code err = I40E_SUCCESS;
struct i40e_vsi *vsi = txq->i40e_vsi;
@@ -3167,7 +3167,7 @@ i40e_dev_free_queues(struct rte_eth_dev *dev)
enum i40e_status_code
i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz = NULL;
struct rte_eth_dev *dev;
uint32_t ring_size;
@@ -3181,7 +3181,7 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("i40e fdir tx queue",
- sizeof(struct i40e_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
SOCKET_ID_ANY);
if (!txq) {
@@ -3304,7 +3304,7 @@ void
i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct i40e_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -3552,7 +3552,7 @@ i40e_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
}
void __rte_cold
-i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
+i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3592,7 +3592,7 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
#endif
if (ad->tx_vec_allowed) {
for (i = 0; i < dev->data->nb_tx_queues; i++) {
- struct i40e_tx_queue *txq =
+ struct ci_tx_queue *txq =
dev->data->tx_queues[i];
if (txq && i40e_txq_vec_setup(txq)) {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 8315ee2f59..043d1df912 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -124,44 +124,6 @@ struct i40e_rx_queue {
const struct rte_memzone *mz;
};
-/*
- * Structure associated with each TX queue.
- */
-struct i40e_tx_queue {
- uint16_t nb_tx_desc; /**< number of TX descriptors */
- rte_iova_t tx_ring_dma; /**< TX ring DMA address */
- volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */
- struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
- uint16_t tx_tail; /**< current value of tail register */
- volatile uint8_t *qtx_tail; /**< register address of tail */
- uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
- /**< index to last TX descriptor to have been cleaned */
- uint16_t last_desc_cleaned;
- /**< Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- /**< Start freeing TX buffers if there are less free descriptors than
- this value. */
- uint16_t tx_free_thresh;
- /** Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint16_t port_id; /**< Device port identifier. */
- uint16_t queue_id; /**< TX queue index. */
- uint16_t reg_idx;
- struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */
- uint16_t tx_next_dd;
- uint16_t tx_next_rs;
- bool q_set; /**< indicate if tx queue has been configured */
- uint64_t mbuf_errors;
-
- bool tx_deferred_start; /**< don't start this queue in dev start */
- uint8_t dcb_tc; /**< Traffic class of tx queue */
- uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
- const struct rte_memzone *mz;
-};
-
/** Offload features */
union i40e_tx_offload {
uint64_t data;
@@ -209,15 +171,15 @@ uint16_t i40e_simple_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
uint16_t i40e_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int i40e_tx_queue_init(struct i40e_tx_queue *txq);
+int i40e_tx_queue_init(struct ci_tx_queue *txq);
int i40e_rx_queue_init(struct i40e_rx_queue *rxq);
-void i40e_free_tx_resources(struct i40e_tx_queue *txq);
+void i40e_free_tx_resources(struct ci_tx_queue *txq);
void i40e_free_rx_resources(struct i40e_rx_queue *rxq);
void i40e_dev_clear_queues(struct rte_eth_dev *dev);
void i40e_dev_free_queues(struct rte_eth_dev *dev);
void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
-void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+void i40e_reset_tx_queue(struct ci_tx_queue *txq);
+void i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq);
int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
@@ -237,13 +199,13 @@ uint16_t i40e_recv_scattered_pkts_vec(void *rx_queue,
uint16_t nb_pkts);
int i40e_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
int i40e_rxq_vec_setup(struct i40e_rx_queue *rxq);
-int i40e_txq_vec_setup(struct i40e_tx_queue *txq);
+int i40e_txq_vec_setup(struct ci_tx_queue *txq);
void i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq);
uint16_t i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
void i40e_set_rx_function(struct rte_eth_dev *dev);
void i40e_set_tx_function_flag(struct rte_eth_dev *dev,
- struct i40e_tx_queue *txq);
+ struct ci_tx_queue *txq);
void i40e_set_tx_function(struct rte_eth_dev *dev);
void i40e_set_default_ptype_table(struct rte_eth_dev *dev);
void i40e_set_default_pctype_table(struct rte_eth_dev *dev);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index bf0e9ebd71..500bba2cef 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -551,7 +551,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -625,7 +625,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused * txq)
+i40e_txq_vec_setup(struct ci_tx_queue __rte_unused * txq)
{
return 0;
}
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 5042e348db..29bef64287 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -743,7 +743,7 @@ static inline uint16_t
i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -808,7 +808,7 @@ i40e_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 04fbe3b2e3..a3f6d1667f 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -755,7 +755,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
}
static __rte_always_inline int
-i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -933,7 +933,7 @@ static inline uint16_t
i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -999,7 +999,7 @@ i40e_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index e81f958361..57d6263ccf 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -17,7 +17,7 @@
#endif
static __rte_always_inline int
-i40e_tx_free_bufs(struct i40e_tx_queue *txq)
+i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 05191e4884..c97f337e43 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -679,7 +679,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
struct rte_mbuf **__rte_restrict tx_pkts, uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -753,7 +753,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq)
+i40e_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
return 0;
}
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index d81b553842..2c467e2089 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -698,7 +698,7 @@ uint16_t
i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -771,7 +771,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
}
int __rte_cold
-i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq)
+i40e_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
return 0;
}
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 204d4eadbb..65c18921f4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1177,8 +1177,8 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
{
struct ice_rx_queue **rxq =
(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
- struct ice_tx_queue **txq =
- (struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
+ struct ci_tx_queue **txq =
+ (struct ci_tx_queue **)hw->eth_dev->data->tx_queues;
struct virtchnl_vsi_queue_config_info *vc_config;
struct virtchnl_queue_pair_info *vc_qp;
struct dcf_virtchnl_cmd args;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4ffd1f5567..a0c065d78c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -387,7 +387,7 @@ reset_rx_queue(struct ice_rx_queue *rxq)
}
static inline void
-reset_tx_queue(struct ice_tx_queue *txq)
+reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint32_t i, size;
@@ -454,7 +454,7 @@ ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct iavf_hw *hw = &ad->real_hw.avf;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err = 0;
if (tx_queue_id >= dev->data->nb_tx_queues)
@@ -486,7 +486,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct ice_dcf_hw *hw = &ad->real_hw;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
if (tx_queue_id >= dev->data->nb_tx_queues)
@@ -511,7 +511,7 @@ static int
ice_dcf_start_queues(struct rte_eth_dev *dev)
{
struct ice_rx_queue *rxq;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int nb_rxq = 0;
int nb_txq, i;
@@ -638,7 +638,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
struct ice_dcf_adapter *ad = dev->data->dev_private;
struct ice_dcf_hw *hw = &ad->real_hw;
struct ice_rx_queue *rxq;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret, i;
/* Stop All queues */
diff --git a/drivers/net/ice/ice_diagnose.c b/drivers/net/ice/ice_diagnose.c
index 5bec9d00ad..a50068441a 100644
--- a/drivers/net/ice/ice_diagnose.c
+++ b/drivers/net/ice/ice_diagnose.c
@@ -605,7 +605,7 @@ void print_node(const struct rte_eth_dev_data *ethdata,
get_elem_type(data->data.elem_type));
if (data->data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
for (uint16_t i = 0; i < ethdata->nb_tx_queues; i++) {
- struct ice_tx_queue *q = ethdata->tx_queues[i];
+ struct ci_tx_queue *q = ethdata->tx_queues[i];
if (q->q_teid == data->node_teid) {
fprintf(stream, "\t\t\t\t<tr><td>TXQ</td><td>%u</td></tr>\n", i);
break;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 93a6308a86..80eee03204 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -6448,7 +6448,7 @@ ice_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct ice_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index a5b27fabd2..ba54655499 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -258,7 +258,7 @@ struct ice_vsi_list {
};
struct ice_rx_queue;
-struct ice_tx_queue;
+struct ci_tx_queue;
/**
* Structure that defines a VSI, associated with an adapter.
@@ -408,7 +408,7 @@ struct ice_fdir_counter_pool_container {
*/
struct ice_fdir_info {
struct ice_vsi *fdir_vsi; /* pointer to fdir VSI structure */
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_rx_queue *rxq;
void *prg_pkt; /* memory for fdir program packet */
uint64_t dma_addr; /* physic address of packet memory*/
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5ec92f6d0c..bcc7c7a016 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -743,7 +743,7 @@ ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct ice_vsi *vsi;
struct ice_hw *hw;
@@ -944,7 +944,7 @@ int
ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
struct ice_vsi *vsi;
struct ice_hw *hw;
@@ -1008,7 +1008,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* Free all mbufs for descriptors in tx queue */
static void
-_ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+_ice_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -1026,7 +1026,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
}
static void
-ice_reset_tx_queue(struct ice_tx_queue *txq)
+ice_reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint16_t i, prev, size;
@@ -1066,7 +1066,7 @@ ice_reset_tx_queue(struct ice_tx_queue *txq)
int
ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
@@ -1134,7 +1134,7 @@ ice_fdir_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
@@ -1354,7 +1354,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *vsi = pf->main_vsi;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -1467,7 +1467,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket(NULL,
- sizeof(struct ice_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -1542,7 +1542,7 @@ ice_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
void
ice_tx_queue_release(void *txq)
{
- struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
@@ -1577,7 +1577,7 @@ void
ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -2354,7 +2354,7 @@ ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct ice_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
@@ -2412,7 +2412,7 @@ ice_free_queues(struct rte_eth_dev *dev)
int
ice_fdir_setup_tx_resources(struct ice_pf *pf)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *tz = NULL;
uint32_t ring_size;
struct rte_eth_dev *dev;
@@ -2426,7 +2426,7 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("ice fdir tx queue",
- sizeof(struct ice_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
SOCKET_ID_ANY);
if (!txq) {
@@ -2835,7 +2835,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
}
static inline int
-ice_xmit_cleanup(struct ice_tx_queue *txq)
+ice_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile struct ice_tx_desc *txd = txq->ice_tx_ring;
@@ -2958,7 +2958,7 @@ ice_calc_pkt_desc(struct rte_mbuf *tx_pkt)
uint16_t
ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
volatile struct ice_tx_desc *ice_tx_ring;
volatile struct ice_tx_desc *txd;
struct ci_tx_entry *sw_ring;
@@ -3182,7 +3182,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
static __rte_always_inline int
-ice_tx_free_bufs(struct ice_tx_queue *txq)
+ice_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint16_t i;
@@ -3218,7 +3218,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
}
static int
-ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
+ice_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -3278,7 +3278,7 @@ ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
#ifdef RTE_ARCH_X86
static int
-ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
+ice_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -3286,7 +3286,7 @@ ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
#endif
static int
-ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
+ice_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -3312,7 +3312,7 @@ ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
int
ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3357,7 +3357,7 @@ tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
}
static inline void
-ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+ice_tx_fill_hw_ring(struct ci_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail];
@@ -3389,7 +3389,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
}
static inline uint16_t
-tx_xmit_pkts(struct ice_tx_queue *txq,
+tx_xmit_pkts(struct ci_tx_queue *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
@@ -3452,14 +3452,14 @@ ice_xmit_pkts_simple(void *tx_queue,
uint16_t nb_tx = 0;
if (likely(nb_pkts <= ICE_TX_MAX_BURST))
- return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ return tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
tx_pkts, nb_pkts);
while (nb_pkts) {
uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
ICE_TX_MAX_BURST);
- ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ ret = tx_xmit_pkts((struct ci_tx_queue *)tx_queue,
&tx_pkts[nb_tx], num);
nb_tx = (uint16_t)(nb_tx + ret);
nb_pkts = (uint16_t)(nb_pkts - ret);
@@ -3667,7 +3667,7 @@ ice_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
}
void __rte_cold
-ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
+ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3716,7 +3716,7 @@ ice_check_empty_mbuf(struct rte_mbuf *tx_pkt)
static uint16_t
ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct ice_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
uint16_t idx;
struct rte_mbuf *mb;
bool pkt_error = false;
@@ -3778,7 +3778,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
pkt_error = true;
break;
}
- if (mb->nb_segs > ((struct ice_tx_queue *)tx_queue)->nb_tx_desc) {
+ if (mb->nb_segs > ((struct ci_tx_queue *)tx_queue)->nb_tx_desc) {
PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
pkt_error = true;
break;
@@ -3839,7 +3839,7 @@ ice_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
(m->tso_segsz < ICE_MIN_TSO_MSS ||
m->tso_segsz > ICE_MAX_TSO_MSS ||
m->nb_segs >
- ((struct ice_tx_queue *)tx_queue)->nb_tx_desc ||
+ ((struct ci_tx_queue *)tx_queue)->nb_tx_desc ||
m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
/**
* MSS outside the range are considered malicious
@@ -3881,7 +3881,7 @@ ice_set_tx_function(struct rte_eth_dev *dev)
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int mbuf_check = ad->devargs.mbuf_check;
#ifdef RTE_ARCH_X86
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
int tx_check_ret = -1;
@@ -4693,7 +4693,7 @@ ice_check_fdir_programming_status(struct ice_rx_queue *rxq)
int
ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc)
{
- struct ice_tx_queue *txq = pf->fdir.txq;
+ struct ci_tx_queue *txq = pf->fdir.txq;
struct ice_rx_queue *rxq = pf->fdir.rxq;
volatile struct ice_fltr_desc *fdirdp;
volatile struct ice_tx_desc *txdp;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 3257f449f5..1cae8a9b50 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -79,7 +79,6 @@ extern int ice_timestamp_dynfield_offset;
#define ICE_TX_MTU_SEG_MAX 8
typedef void (*ice_rx_release_mbufs_t)(struct ice_rx_queue *rxq);
-typedef void (*ice_tx_release_mbufs_t)(struct ice_tx_queue *txq);
typedef void (*ice_rxd_to_pkt_fields_t)(struct ice_rx_queue *rxq,
struct rte_mbuf *mb,
volatile union ice_rx_flex_desc *rxdp);
@@ -145,42 +144,6 @@ struct ice_rx_queue {
bool ts_enable; /* if rxq timestamp is enabled */
};
-struct ice_tx_queue {
- uint16_t nb_tx_desc; /* number of TX descriptors */
- rte_iova_t tx_ring_dma; /* TX ring DMA address */
- volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */
- struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
- uint16_t tx_tail; /* current value of tail register */
- volatile uint8_t *qtx_tail; /* register address of tail */
- uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
- /* index to last TX descriptor to have been cleaned */
- uint16_t last_desc_cleaned;
- /* Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- /* Start freeing TX buffers if there are less free descriptors than
- * this value.
- */
- uint16_t tx_free_thresh;
- /* Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint16_t port_id; /* Device port identifier. */
- uint16_t queue_id; /* TX queue index. */
- uint32_t q_teid; /* TX schedule node id. */
- uint16_t reg_idx;
- uint64_t offloads;
- struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */
- uint16_t tx_next_dd;
- uint16_t tx_next_rs;
- uint64_t mbuf_errors;
- bool tx_deferred_start; /* don't start this queue in dev start */
- bool q_set; /* indicate if tx queue has been configured */
- ice_tx_release_mbufs_t tx_rel_mbufs;
- const struct rte_memzone *mz;
-};
-
/* Offload features */
union ice_tx_offload {
uint64_t data;
@@ -268,7 +231,7 @@ void ice_set_rx_function(struct rte_eth_dev *dev);
uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
void ice_set_tx_function_flag(struct rte_eth_dev *dev,
- struct ice_tx_queue *txq);
+ struct ci_tx_queue *txq);
void ice_set_tx_function(struct rte_eth_dev *dev);
uint32_t ice_rx_queue_count(void *rx_queue);
void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
@@ -290,7 +253,7 @@ void ice_select_rxd_to_pkt_fields_handler(struct ice_rx_queue *rxq,
int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
int ice_rxq_vec_setup(struct ice_rx_queue *rxq);
-int ice_txq_vec_setup(struct ice_tx_queue *txq);
+int ice_txq_vec_setup(struct ci_tx_queue *txq);
uint16_t ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index dde07ac99e..12ffa0fa9a 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -856,7 +856,7 @@ static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -924,7 +924,7 @@ ice_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index e4d0270176..eabd8b04a0 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -860,7 +860,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
}
static __rte_always_inline int
-ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
+ice_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -1053,7 +1053,7 @@ static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool do_offload)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -1122,7 +1122,7 @@ ice_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1144,7 +1144,7 @@ ice_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 7b865b53ad..b39289ceb5 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -13,7 +13,7 @@
#endif
static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ice_tx_queue *txq)
+ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
@@ -105,7 +105,7 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
}
static inline void
-_ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
+_ice_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -231,7 +231,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
}
static inline int
-ice_tx_vec_queue_default(struct ice_tx_queue *txq)
+ice_tx_vec_queue_default(struct ci_tx_queue *txq)
{
if (!txq)
return -1;
@@ -273,7 +273,7 @@ static inline int
ice_tx_vec_dev_check_default(struct rte_eth_dev *dev)
{
int i;
- struct ice_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret = 0;
int result = 0;
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 364207e8a8..f11528385a 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -697,7 +697,7 @@ static uint16_t
ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -766,7 +766,7 @@ ice_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -793,7 +793,7 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq)
}
int __rte_cold
-ice_txq_vec_setup(struct ice_tx_queue __rte_unused *txq)
+ice_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
if (!txq)
return -1;
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v4 07/24] net/iavf: use common Tx queue structure
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (5 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 06/24] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 08/24] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
` (16 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Vladimir Medvedkin, Ian Stokes, Konstantin Ananyev
Merge in the few additional fields used by the iavf driver and convert it
to use the common Tx queue structure as well.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
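Note: as a rough sketch of the end result (illustrative only — the helper
below is made up and is not part of this patch), iavf code now takes the
common ci_tx_queue type but still reaches its own descriptor ring and VSI
through the per-driver union members:

	/* Hypothetical helper, not in the patch: shows the iavf "view"
	 * of the shared queue structure via the anonymous unions.
	 */
	static inline struct iavf_adapter *
	example_iavf_txq_adapter(struct ci_tx_queue *txq)
	{
		volatile struct iavf_tx_desc *ring = txq->iavf_tx_ring;

		RTE_SET_USED(ring); /* descriptor writes go through "ring" */
		return txq->iavf_vsi->adapter;
	}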
drivers/net/_common_intel/tx.h | 15 +++++++-
drivers/net/iavf/iavf.h | 2 +-
drivers/net/iavf/iavf_ethdev.c | 4 +-
drivers/net/iavf/iavf_rxtx.c | 42 ++++++++++-----------
drivers/net/iavf/iavf_rxtx.h | 49 +++----------------------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 8 ++--
drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 ++--
drivers/net/iavf/iavf_vchnl.c | 6 +--
10 files changed, 62 insertions(+), 90 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c965f5ee6c..c4a1a0c816 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -31,8 +31,9 @@ typedef void (*ice_tx_release_mbufs_t)(struct ci_tx_queue *txq);
struct ci_tx_queue {
union { /* TX ring virtual address */
- volatile struct ice_tx_desc *ice_tx_ring;
volatile struct i40e_tx_desc *i40e_tx_ring;
+ volatile struct iavf_tx_desc *iavf_tx_ring;
+ volatile struct ice_tx_desc *ice_tx_ring;
};
volatile uint8_t *qtx_tail; /* register address of tail */
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
@@ -63,8 +64,9 @@ struct ci_tx_queue {
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
union { /* the VSI this queue belongs to */
- struct ice_vsi *ice_vsi;
struct i40e_vsi *i40e_vsi;
+ struct iavf_vsi *iavf_vsi;
+ struct ice_vsi *ice_vsi;
};
const struct rte_memzone *mz;
@@ -76,6 +78,15 @@ struct ci_tx_queue {
struct { /* I40E driver specific values */
uint8_t dcb_tc;
};
+ struct { /* iavf driver specific values */
+ uint16_t ipsec_crypto_pkt_md_offset;
+ uint8_t rel_mbufs_type;
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
+ uint8_t vlan_flag;
+ uint8_t tc;
+ bool use_ctx; /* with ctx info, each pkt needs two descriptors */
+ };
};
};
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index ad526c644c..956c60ef45 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -98,7 +98,7 @@
struct iavf_adapter;
struct iavf_rx_queue;
-struct iavf_tx_queue;
+struct ci_tx_queue;
struct iavf_ipsec_crypto_stats {
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 7f80cd6258..328c224c93 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -954,7 +954,7 @@ static int
iavf_start_queues(struct rte_eth_dev *dev)
{
struct iavf_rx_queue *rxq;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
uint16_t nb_txq, nb_rxq;
@@ -1885,7 +1885,7 @@ iavf_dev_update_mbuf_stats(struct rte_eth_dev *ethdev,
struct iavf_mbuf_stats *mbuf_stats)
{
uint16_t idx;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
txq = ethdev->data->tx_queues[idx];
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6eda91e76b..7e381b2a17 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -213,7 +213,7 @@ check_rx_vec_allow(struct iavf_rx_queue *rxq)
}
static inline bool
-check_tx_vec_allow(struct iavf_tx_queue *txq)
+check_tx_vec_allow(struct ci_tx_queue *txq)
{
if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) &&
txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST &&
@@ -282,7 +282,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq)
}
static inline void
-reset_tx_queue(struct iavf_tx_queue *txq)
+reset_tx_queue(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txe;
uint32_t i, size;
@@ -388,7 +388,7 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq)
}
static inline void
-release_txq_mbufs(struct iavf_tx_queue *txq)
+release_txq_mbufs(struct ci_tx_queue *txq)
{
uint16_t i;
@@ -778,7 +778,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
struct iavf_info *vf =
IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_vsi *vsi = &vf->vsi;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
const struct rte_memzone *mz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
@@ -814,7 +814,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("iavf txq",
- sizeof(struct iavf_tx_queue),
+ sizeof(struct ci_tx_queue),
RTE_CACHE_LINE_SIZE,
socket_id);
if (!txq) {
@@ -979,7 +979,7 @@ iavf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err = 0;
PMD_DRV_FUNC_TRACE();
@@ -1048,7 +1048,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int err;
PMD_DRV_FUNC_TRACE();
@@ -1092,7 +1092,7 @@ iavf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
void
iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
{
- struct iavf_tx_queue *q = dev->data->tx_queues[qid];
+ struct ci_tx_queue *q = dev->data->tx_queues[qid];
if (!q)
return;
@@ -1107,7 +1107,7 @@ static void
iavf_reset_queues(struct rte_eth_dev *dev)
{
struct iavf_rx_queue *rxq;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
for (i = 0; i < dev->data->nb_tx_queues; i++) {
@@ -2377,7 +2377,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue,
}
static inline int
-iavf_xmit_cleanup(struct iavf_tx_queue *txq)
+iavf_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
uint16_t last_desc_cleaned = txq->last_desc_cleaned;
@@ -2781,7 +2781,7 @@ iavf_fill_data_desc(volatile struct iavf_tx_desc *desc,
static struct iavf_ipsec_crypto_pkt_metadata *
-iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
+iavf_ipsec_crypto_get_pkt_metadata(const struct ci_tx_queue *txq,
struct rte_mbuf *m)
{
if (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)
@@ -2795,7 +2795,7 @@ iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile struct iavf_tx_desc *txr = txq->iavf_tx_ring;
struct ci_tx_entry *txe_ring = txq->sw_ring;
struct ci_tx_entry *txe, *txn;
@@ -3027,7 +3027,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* correct queue.
*/
static int
-iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
+iavf_check_vlan_up2tc(struct ci_tx_queue *txq, struct rte_mbuf *m)
{
struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id];
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -3646,7 +3646,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
int i, ret;
uint64_t ol_flags;
struct rte_mbuf *m;
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id];
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_adapter *adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3800,7 +3800,7 @@ static uint16_t
iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
enum iavf_tx_burst_type tx_burst_type;
if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll)
@@ -3823,7 +3823,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t good_pkts = nb_pkts;
const char *reason = NULL;
bool pkt_error = false;
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct iavf_adapter *adapter = txq->iavf_vsi->adapter;
enum iavf_tx_burst_type tx_burst_type =
txq->iavf_vsi->adapter->tx_burst_type;
@@ -4144,7 +4144,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
int mbuf_check = adapter->devargs.mbuf_check;
int no_poll_on_link_down = adapter->devargs.no_poll_on_link_down;
#ifdef RTE_ARCH_X86
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int i;
int check_ret;
bool use_sse = false;
@@ -4265,7 +4265,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
}
static int
-iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
+iavf_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
@@ -4324,7 +4324,7 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
int
iavf_dev_tx_done_cleanup(void *txq, uint32_t free_cnt)
{
- struct iavf_tx_queue *q = (struct iavf_tx_queue *)txq;
+ struct ci_tx_queue *q = (struct ci_tx_queue *)txq;
return iavf_tx_done_cleanup_full(q, free_cnt);
}
@@ -4350,7 +4350,7 @@ void
iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -4422,7 +4422,7 @@ iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
int
iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
{
- struct iavf_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint64_t *status;
uint64_t mask, expect;
uint32_t desc;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index cc1eaaf54c..c18e01560c 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -211,7 +211,7 @@ struct iavf_rxq_ops {
};
struct iavf_txq_ops {
- void (*release_mbufs)(struct iavf_tx_queue *txq);
+ void (*release_mbufs)(struct ci_tx_queue *txq);
};
@@ -273,43 +273,6 @@ struct iavf_rx_queue {
uint64_t hw_time_update;
};
-/* Structure associated with each TX queue. */
-struct iavf_tx_queue {
- const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */
- rte_iova_t tx_ring_dma; /* Tx ring DMA address */
- struct ci_tx_entry *sw_ring; /* address array of SW ring */
- uint16_t nb_tx_desc; /* ring length */
- uint16_t tx_tail; /* current value of tail */
- volatile uint8_t *qtx_tail; /* register address of tail */
- /* number of used desc since RS bit set */
- uint16_t nb_tx_used;
- uint16_t nb_tx_free;
- uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t tx_free_thresh;
- uint16_t tx_rs_thresh;
- uint8_t rel_mbufs_type;
- struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */
-
- uint16_t port_id;
- uint16_t queue_id;
- uint64_t offloads;
- uint16_t tx_next_dd; /* next to set RS, for VPMD */
- uint16_t tx_next_rs; /* next to check DD, for VPMD */
- uint16_t ipsec_crypto_pkt_md_offset;
-
- uint64_t mbuf_errors;
-
- bool q_set; /* if rx queue has been configured */
- bool tx_deferred_start; /* don't start this queue in dev start */
- const struct iavf_txq_ops *ops;
-#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
-#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
- uint8_t vlan_flag;
- uint8_t tc;
- uint8_t use_ctx:1; /* if use the ctx desc, a packet needs two descriptors */
-};
-
/* Offload features */
union iavf_tx_offload {
uint64_t data;
@@ -724,7 +687,7 @@ int iavf_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
int iavf_rx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
-int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+int iavf_txq_vec_setup(struct ci_tx_queue *txq);
uint16_t iavf_recv_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t iavf_recv_pkts_vec_avx512_offload(void *rx_queue,
@@ -757,14 +720,14 @@ uint16_t iavf_xmit_pkts_vec_avx512_ctx_offload(void *tx_queue, struct rte_mbuf *
uint16_t nb_pkts);
uint16_t iavf_xmit_pkts_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq);
+int iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq);
uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
void iavf_set_default_ptype_table(struct rte_eth_dev *dev);
-void iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq);
+void iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq);
void iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq);
-void iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq);
+void iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq);
static inline
void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq,
@@ -791,7 +754,7 @@ void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq,
* to print the qwords
*/
static inline
-void iavf_dump_tx_descriptor(const struct iavf_tx_queue *txq,
+void iavf_dump_tx_descriptor(const struct ci_tx_queue *txq,
const volatile void *desc, uint16_t tx_id)
{
const char *name;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index f33ceceee1..fdb98b417a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1734,7 +1734,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -1801,7 +1801,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 97420a75fd..9cf7171524 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1845,7 +1845,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
}
static __rte_always_inline int
-iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
+iavf_tx_free_bufs_avx512(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t n;
@@ -2311,7 +2311,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -2379,7 +2379,7 @@ static __rte_always_inline uint16_t
iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, nb_mbuf, tx_id;
@@ -2447,7 +2447,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -2473,7 +2473,7 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
}
void __rte_cold
-iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
+iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq)
{
unsigned int i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
@@ -2494,7 +2494,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq)
}
int __rte_cold
-iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
{
txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC;
return 0;
@@ -2512,7 +2512,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool offload)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 6305c8cdd6..f1bb12c4f4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -17,7 +17,7 @@
#endif
static __rte_always_inline int
-iavf_tx_free_bufs(struct iavf_tx_queue *txq)
+iavf_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t n;
@@ -104,7 +104,7 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
}
static inline void
-_iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq)
+_iavf_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
unsigned i;
const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
@@ -164,7 +164,7 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
}
static inline int
-iavf_tx_vec_queue_default(struct iavf_tx_queue *txq)
+iavf_tx_vec_queue_default(struct ci_tx_queue *txq)
{
if (!txq)
return -1;
@@ -227,7 +227,7 @@ static inline int
iavf_tx_vec_dev_check_default(struct rte_eth_dev *dev)
{
int i;
- struct iavf_tx_queue *txq;
+ struct ci_tx_queue *txq;
int ret;
int result = 0;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 64c3bf0eaa..5c0b2fff46 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1366,7 +1366,7 @@ uint16_t
iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
struct ci_tx_entry *txep;
uint16_t n, nb_commit, tx_id;
@@ -1435,7 +1435,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -1459,13 +1459,13 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
}
void __rte_cold
-iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq)
+iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq)
{
_iavf_tx_queue_release_mbufs_vec(txq);
}
int __rte_cold
-iavf_txq_vec_setup(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
return 0;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 0646a2f978..c74466735d 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1218,10 +1218,8 @@ int
iavf_configure_queues(struct iavf_adapter *adapter,
uint16_t num_queue_pairs, uint16_t index)
{
- struct iavf_rx_queue **rxq =
- (struct iavf_rx_queue **)adapter->dev_data->rx_queues;
- struct iavf_tx_queue **txq =
- (struct iavf_tx_queue **)adapter->dev_data->tx_queues;
+ struct iavf_rx_queue **rxq = (struct iavf_rx_queue **)adapter->dev_data->rx_queues;
+ struct ci_tx_queue **txq = (struct ci_tx_queue **)adapter->dev_data->tx_queues;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
struct virtchnl_vsi_queue_config_info *vc_config;
struct virtchnl_queue_pair_info *vc_qp;
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v4 08/24] net/ixgbe: convert Tx queue context cache field to ptr
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (6 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 07/24] net/iavf: use common Tx queue structure Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 09/24] net/ixgbe: use common Tx queue structure Bruce Richardson
` (15 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin
Rather than having a two-element array of context cache values inside
the Tx queue structure, convert it to a pointer to a cache placed at the
end of the same allocation. This makes future merging of the structures
easier, as we don't need the "ixgbe_advctx_info" struct defined when
defining a combined queue structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
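Layout sketch of the allocation after this change (illustrative only,
not part of the diff):

	/*
	 * One rte_zmalloc_socket() call covers both regions:
	 *
	 *   +---------------------+------------------------------------+
	 *   | struct ..._tx_queue | IXGBE_CTX_NUM x ixgbe_advctx_info  |
	 *   +---------------------+------------------------------------+
	 *   ^ txq                  ^ txq->ctx_cache
	 *
	 * Locality is unchanged (still a single allocation), but the
	 * queue struct definition no longer needs ixgbe_advctx_info
	 * to be visible.
	 */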
drivers/net/ixgbe/ixgbe_rxtx.c | 7 ++++---
drivers/net/ixgbe/ixgbe_rxtx.h | 4 ++--
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 3 +--
3 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index f7ddbba1b6..2ca26cd132 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2522,8 +2522,7 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
txq->ctx_curr = 0;
- memset((void *)&txq->ctx_cache, 0,
- IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+ memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
}
static const struct ixgbe_txq_ops def_txq_ops = {
@@ -2741,10 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
/* First allocate the tx queue data structure */
- txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue),
+ txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
+ sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq == NULL)
return -ENOMEM;
+ txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
/*
* Allocate TX ring hardware descriptors. A memzone large enough to
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index f6bae37cf3..847cacf7b5 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -215,8 +215,8 @@ struct ixgbe_tx_queue {
uint8_t wthresh; /**< Write-back threshold reg. */
uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
- /** Hardware context0 history. */
- struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
+ /** Hardware context history. */
+ struct ixgbe_advctx_info *ctx_cache;
const struct ixgbe_txq_ops *ops; /**< txq ops */
bool tx_deferred_start; /**< not in global dev start. */
#ifdef RTE_LIB_SECURITY
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index cc51bf6eed..ec334b5f65 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -176,8 +176,7 @@ _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
txq->ctx_curr = 0;
- memset((void *)&txq->ctx_cache, 0,
- IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+ memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
}
static inline int
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v4 09/24] net/ixgbe: use common Tx queue structure
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (7 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 08/24] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 10/24] net/_common_intel: pack " Bruce Richardson
` (14 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Merge in the additional fields used by the ixgbe driver and then convert
it to use the common Tx queue structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
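A minimal sketch of the sw_ring union after this patch (hypothetical
helper, not part of the diff): the scalar and vector Tx paths share a
single pointer slot, and which member is valid depends on which setup
path ran for the queue.

	/* Hypothetical: clear the first SW ring entry for either path. */
	static inline void
	example_clear_first_entry(struct ci_tx_queue *txq, bool vector_path)
	{
		if (vector_path)
			txq->sw_ring_vec[0].mbuf = NULL; /* ci_tx_entry_vec view */
		else
			txq->sw_ring[0].mbuf = NULL;     /* ci_tx_entry view */
	}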
drivers/net/_common_intel/tx.h | 14 +++-
drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
.../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 64 +++++++++----------
drivers/net/ixgbe/ixgbe_rxtx.h | 56 ++--------------
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 26 ++++----
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 14 ++--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 14 ++--
8 files changed, 80 insertions(+), 114 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c4a1a0c816..51ae3b051d 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -34,9 +34,13 @@ struct ci_tx_queue {
volatile struct i40e_tx_desc *i40e_tx_ring;
volatile struct iavf_tx_desc *iavf_tx_ring;
volatile struct ice_tx_desc *ice_tx_ring;
+ volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
};
volatile uint8_t *qtx_tail; /* register address of tail */
- struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ union {
+ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
+ struct ci_tx_entry_vec *sw_ring_vec;
+ };
rte_iova_t tx_ring_dma; /* TX ring DMA address */
uint16_t nb_tx_desc; /* number of TX descriptors */
uint16_t tx_tail; /* current value of tail register */
@@ -87,6 +91,14 @@ struct ci_tx_queue {
uint8_t tc;
bool use_ctx; /* with ctx info, each pkt needs two descriptors */
};
+ struct { /* ixgbe specific values */
+ const struct ixgbe_txq_ops *ops;
+ struct ixgbe_advctx_info *ctx_cache;
+ uint32_t ctx_curr;
+#ifdef RTE_LIB_SECURITY
+ uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
+#endif
+ };
};
};
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 8bee97d191..5f18fbaad5 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1118,7 +1118,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
* RX and TX function.
*/
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
/* TX queue function in primary, set by last queue initialized
* Tx queue may not initialized by primary process
*/
@@ -1623,7 +1623,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
* RX function
*/
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
/* TX queue function in primary, set by last queue initialized
* Tx queue may not initialized by primary process
*/
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
index a878db3150..3fd05ed5eb 100644
--- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -51,7 +51,7 @@ uint16_t
ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
- struct ixgbe_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
struct ci_tx_entry *txep;
struct rte_mbuf **rxep;
int i, n;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 2ca26cd132..344ef85685 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -98,7 +98,7 @@
* Return the total number of buffers freed.
*/
static __rte_always_inline int
-ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry *txep;
uint32_t status;
@@ -195,7 +195,7 @@ tx1(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts)
* Copy mbuf pointers to the S/W ring.
*/
static inline void
-ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts,
+ixgbe_tx_fill_hw_ring(struct ci_tx_queue *txq, struct rte_mbuf **pkts,
uint16_t nb_pkts)
{
volatile union ixgbe_adv_tx_desc *txdp = &txq->ixgbe_tx_ring[txq->tx_tail];
@@ -231,7 +231,7 @@ static inline uint16_t
tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring;
uint16_t n = 0;
@@ -344,7 +344,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
uint16_t nb_tx = 0;
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
while (nb_pkts) {
uint16_t ret, num;
@@ -362,7 +362,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static inline void
-ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
+ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
__rte_unused uint64_t *mdata)
@@ -493,7 +493,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
* or create a new context descriptor.
*/
static inline uint32_t
-what_advctx_update(struct ixgbe_tx_queue *txq, uint64_t flags,
+what_advctx_update(struct ci_tx_queue *txq, uint64_t flags,
union ixgbe_tx_offload tx_offload)
{
/* If match with the current used context */
@@ -561,7 +561,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
/* Reset transmit descriptors after they have been used */
static inline int
-ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
+ixgbe_xmit_cleanup(struct ci_tx_queue *txq)
{
struct ci_tx_entry *sw_ring = txq->sw_ring;
volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring;
@@ -623,7 +623,7 @@ uint16_t
ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ci_tx_entry *sw_ring;
struct ci_tx_entry *txe, *txn;
volatile union ixgbe_adv_tx_desc *txr;
@@ -963,7 +963,7 @@ ixgbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
int i, ret;
uint64_t ol_flags;
struct rte_mbuf *m;
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
for (i = 0; i < nb_pkts; i++) {
m = tx_pkts[i];
@@ -2335,7 +2335,7 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
**********************************************************************/
static void __rte_cold
-ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs(struct ci_tx_queue *txq)
{
unsigned i;
@@ -2350,7 +2350,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
}
static int
-ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
+ixgbe_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt)
{
struct ci_tx_entry *swr_ring = txq->sw_ring;
uint16_t i, tx_last, tx_id;
@@ -2408,7 +2408,7 @@ ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
}
static int
-ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
+ixgbe_tx_done_cleanup_simple(struct ci_tx_queue *txq,
uint32_t free_cnt)
{
int i, n, cnt;
@@ -2432,7 +2432,7 @@ ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
}
static int
-ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
+ixgbe_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused,
uint32_t free_cnt __rte_unused)
{
return -ENOTSUP;
@@ -2441,7 +2441,7 @@ ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
int
ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
@@ -2450,7 +2450,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
(rte_eal_process_type() != RTE_PROC_PRIMARY ||
- txq->sw_ring_v != NULL)) {
+ txq->sw_ring_vec != NULL)) {
return ixgbe_tx_done_cleanup_vec(txq, free_cnt);
} else {
return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
@@ -2461,7 +2461,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
if (txq != NULL &&
txq->sw_ring != NULL)
@@ -2469,7 +2469,7 @@ ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
}
static void __rte_cold
-ixgbe_tx_queue_release(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
txq->ops->release_mbufs(txq);
@@ -2487,7 +2487,7 @@ ixgbe_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
/* (Re)set dynamic ixgbe_tx_queue fields to defaults */
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = {{0}};
struct ci_tx_entry *txe = txq->sw_ring;
@@ -2536,7 +2536,7 @@ static const struct ixgbe_txq_ops def_txq_ops = {
* in dev_init by secondary process when attaching to an existing ethdev.
*/
void __rte_cold
-ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
+ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
@@ -2618,7 +2618,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_txconf *tx_conf)
{
const struct rte_memzone *tz;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_hw *hw;
uint16_t tx_rs_thresh, tx_free_thresh;
uint64_t offloads;
@@ -2740,12 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
}
/* First allocate the tx queue data structure */
- txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
+ txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ci_tx_queue) +
sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
RTE_CACHE_LINE_SIZE, socket_id);
if (txq == NULL)
return -ENOMEM;
- txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
+ txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ci_tx_queue));
/*
* Allocate TX ring hardware descriptors. A memzone large enough to
@@ -3312,7 +3312,7 @@ ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
int
ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
{
- struct ixgbe_tx_queue *txq = tx_queue;
+ struct ci_tx_queue *txq = tx_queue;
volatile uint32_t *status;
uint32_t desc;
@@ -3377,7 +3377,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
for (i = 0; i < dev->data->nb_tx_queues; i++) {
- struct ixgbe_tx_queue *txq = dev->data->tx_queues[i];
+ struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
txq->ops->release_mbufs(txq);
@@ -5284,7 +5284,7 @@ void __rte_cold
ixgbe_dev_tx_init(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint64_t bus_addr;
uint32_t hlreg0;
uint32_t txctrl;
@@ -5402,7 +5402,7 @@ int __rte_cold
ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_rx_queue *rxq;
uint32_t txdctl;
uint32_t dmatxctl;
@@ -5572,7 +5572,7 @@ int __rte_cold
ixgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint32_t txdctl;
int poll_ms;
@@ -5611,7 +5611,7 @@ int __rte_cold
ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint32_t txdctl;
uint32_t txtdh, txtdt;
int poll_ms;
@@ -5685,7 +5685,7 @@ void
ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
txq = dev->data->tx_queues[queue_id];
@@ -5877,7 +5877,7 @@ void __rte_cold
ixgbevf_dev_tx_init(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
uint64_t bus_addr;
uint32_t txctrl;
uint16_t i;
@@ -5918,7 +5918,7 @@ void __rte_cold
ixgbevf_dev_rxtx_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw;
- struct ixgbe_tx_queue *txq;
+ struct ci_tx_queue *txq;
struct ixgbe_rx_queue *rxq;
uint32_t txdctl;
uint32_t rxdctl;
@@ -6127,7 +6127,7 @@ ixgbe_xmit_fixed_burst_vec(void __rte_unused *tx_queue,
}
int
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue __rte_unused *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
return -1;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 847cacf7b5..4333e5bf2f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -180,56 +180,10 @@ struct ixgbe_advctx_info {
union ixgbe_tx_offload tx_offload_mask;
};
-/**
- * Structure associated with each TX queue.
- */
-struct ixgbe_tx_queue {
- /** TX ring virtual address. */
- volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring;
- rte_iova_t tx_ring_dma; /**< TX ring DMA address. */
- union {
- struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
- struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */
- };
- volatile uint8_t *qtx_tail; /**< Address of TDT register. */
- uint16_t nb_tx_desc; /**< number of TX descriptors. */
- uint16_t tx_tail; /**< current value of TDT reg. */
- /**< Start freeing TX buffers if there are less free descriptors than
- this value. */
- uint16_t tx_free_thresh;
- /** Number of TX descriptors to use before RS bit is set. */
- uint16_t tx_rs_thresh;
- /** Number of TX descriptors used since RS bit was set. */
- uint16_t nb_tx_used;
- /** Index to last TX descriptor to have been cleaned. */
- uint16_t last_desc_cleaned;
- /** Total number of TX descriptors ready to be allocated. */
- uint16_t nb_tx_free;
- uint16_t tx_next_dd; /**< next desc to scan for DD bit */
- uint16_t tx_next_rs; /**< next desc to set RS bit */
- uint16_t queue_id; /**< TX queue index. */
- uint16_t reg_idx; /**< TX queue register index. */
- uint16_t port_id; /**< Device port identifier. */
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
- uint32_t ctx_curr; /**< Hardware context states. */
- /** Hardware context history. */
- struct ixgbe_advctx_info *ctx_cache;
- const struct ixgbe_txq_ops *ops; /**< txq ops */
- bool tx_deferred_start; /**< not in global dev start. */
-#ifdef RTE_LIB_SECURITY
- uint8_t using_ipsec;
- /**< indicates that IPsec TX feature is in use */
-#endif
- const struct rte_memzone *mz;
-};
-
struct ixgbe_txq_ops {
- void (*release_mbufs)(struct ixgbe_tx_queue *txq);
- void (*free_swring)(struct ixgbe_tx_queue *txq);
- void (*reset)(struct ixgbe_tx_queue *txq);
+ void (*release_mbufs)(struct ci_tx_queue *txq);
+ void (*free_swring)(struct ci_tx_queue *txq);
+ void (*reset)(struct ci_tx_queue *txq);
};
/*
@@ -250,7 +204,7 @@ struct ixgbe_txq_ops {
* the queue parameters. Used in tx_queue_setup by primary process and then
* in dev_init by secondary process when attaching to an existing ethdev.
*/
-void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq);
+void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq);
/**
* Sets the rx_pkt_burst callback in the ixgbe rte_eth_dev instance.
@@ -287,7 +241,7 @@ void ixgbe_recycle_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb_mbufs);
uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
-int ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq);
+int ixgbe_txq_vec_setup(struct ci_tx_queue *txq);
uint64_t ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev);
uint64_t ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index ec334b5f65..06e760867c 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -12,7 +12,7 @@
#include "ixgbe_rxtx.h"
static __rte_always_inline int
-ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
{
struct ci_tx_entry_vec *txep;
uint32_t status;
@@ -32,7 +32,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh-1)
*/
- txep = &txq->sw_ring_v[txq->tx_next_dd - (n - 1)];
+ txep = &txq->sw_ring_vec[txq->tx_next_dd - (n - 1)];
m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
if (likely(m != NULL)) {
free[0] = m;
@@ -79,7 +79,7 @@ tx_backlog_entry(struct ci_tx_entry_vec *txep,
}
static inline void
-_ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
unsigned int i;
struct ci_tx_entry_vec *txe;
@@ -92,14 +92,14 @@ _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
i != txq->tx_tail;
i = (i + 1) % txq->nb_tx_desc) {
- txe = &txq->sw_ring_v[i];
+ txe = &txq->sw_ring_vec[i];
rte_pktmbuf_free_seg(txe->mbuf);
}
txq->nb_tx_free = max_desc;
/* reset tx_entry */
for (i = 0; i < txq->nb_tx_desc; i++) {
- txe = &txq->sw_ring_v[i];
+ txe = &txq->sw_ring_vec[i];
txe->mbuf = NULL;
}
}
@@ -134,22 +134,22 @@ _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static inline void
-_ixgbe_tx_free_swring_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_tx_free_swring_vec(struct ci_tx_queue *txq)
{
if (txq == NULL)
return;
if (txq->sw_ring != NULL) {
- rte_free(txq->sw_ring_v - 1);
- txq->sw_ring_v = NULL;
+ rte_free(txq->sw_ring_vec - 1);
+ txq->sw_ring_vec = NULL;
}
}
static inline void
-_ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
+_ixgbe_reset_tx_queue_vec(struct ci_tx_queue *txq)
{
static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } };
- struct ci_tx_entry_vec *txe = txq->sw_ring_v;
+ struct ci_tx_entry_vec *txe = txq->sw_ring_vec;
uint16_t i;
/* Zero out HW ring memory */
@@ -198,14 +198,14 @@ ixgbe_rxq_vec_setup_default(struct ixgbe_rx_queue *rxq)
}
static inline int
-ixgbe_txq_vec_setup_default(struct ixgbe_tx_queue *txq,
+ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
const struct ixgbe_txq_ops *txq_ops)
{
- if (txq->sw_ring_v == NULL)
+ if (txq->sw_ring_vec == NULL)
return -1;
/* leave the first one for overflow */
- txq->sw_ring_v = txq->sw_ring_v + 1;
+ txq->sw_ring_vec = txq->sw_ring_vec + 1;
txq->ops = txq_ops;
return 0;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 06be7ec82a..cb749a3760 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -571,7 +571,7 @@ uint16_t
ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -591,7 +591,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -611,7 +611,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
tx_backlog_entry(txep, tx_pkts, nb_commit);
@@ -634,7 +634,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
_ixgbe_tx_queue_release_mbufs_vec(txq);
}
@@ -646,13 +646,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
_ixgbe_tx_free_swring_vec(txq);
}
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
_ixgbe_reset_tx_queue_vec(txq);
}
@@ -670,7 +670,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
}
int __rte_cold
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue *txq)
{
return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops);
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a21a57bd55..e46550f76a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -693,7 +693,7 @@ uint16_t
ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
- struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+ struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile union ixgbe_adv_tx_desc *txdp;
struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
@@ -713,7 +713,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
@@ -734,7 +734,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ixgbe_tx_ring[tx_id];
- txep = &txq->sw_ring_v[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
tx_backlog_entry(txep, tx_pkts, nb_commit);
@@ -757,7 +757,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
}
static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
+ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
{
_ixgbe_tx_queue_release_mbufs_vec(txq);
}
@@ -769,13 +769,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
}
static void __rte_cold
-ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
+ixgbe_tx_free_swring(struct ci_tx_queue *txq)
{
_ixgbe_tx_free_swring_vec(txq);
}
static void __rte_cold
-ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
+ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
{
_ixgbe_reset_tx_queue_vec(txq);
}
@@ -793,7 +793,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
}
int __rte_cold
-ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq)
+ixgbe_txq_vec_setup(struct ci_tx_queue *txq)
{
return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops);
}
--
2.43.0
* [PATCH v4 10/24] net/_common_intel: pack Tx queue structure
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (8 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 09/24] net/ixgbe: use common Tx queue structure Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 11/24] net/_common_intel: add post-Tx buffer free function Bruce Richardson
` (13 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes, Anatoly Burakov
Move some fields around to better pack the Tx queue structure and
ensure that all data used by the vector codepaths is on the first
cacheline of the structure. Checking with "pahole" on a 64-bit build,
only one 6-byte hole is left in the structure - on the second
cacheline - after this patch.
As part of the reordering, move the p/h/wthresh values to the
ixgbe-specific part of the union, since ixgbe is the only driver which
actually uses those values. The i40e and ice drivers just record the
values for later return, so we can drop them from the Tx queue
structure for those drivers and simply report the defaults in all cases.
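As an aside, the packing intent here could be locked in at build time;
the following is a hypothetical sketch (not part of this patch) using
the field names from the diff below:

#include <stddef.h>
#include "_common_intel/tx.h"

/* hypothetical guards: fields read on the vector Tx fast path should
 * stay within the first 64-byte cacheline of struct ci_tx_queue
 */
_Static_assert(offsetof(struct ci_tx_queue, tx_tail) < 64,
		"tx_tail expected on first cacheline");
_Static_assert(offsetof(struct ci_tx_queue, tx_rs_thresh) < 64,
		"tx_rs_thresh expected on first cacheline");
_Static_assert(offsetof(struct ci_tx_queue, tx_next_dd) < 64,
		"tx_next_dd expected on first cacheline");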
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 12 +++++-------
drivers/net/i40e/i40e_rxtx.c | 9 +++------
drivers/net/ice/ice_rxtx.c | 9 +++------
3 files changed, 11 insertions(+), 19 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 51ae3b051d..c372d2838b 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -41,7 +41,6 @@ struct ci_tx_queue {
struct ci_tx_entry *sw_ring; /* virtual address of SW ring */
struct ci_tx_entry_vec *sw_ring_vec;
};
- rte_iova_t tx_ring_dma; /* TX ring DMA address */
uint16_t nb_tx_desc; /* number of TX descriptors */
uint16_t tx_tail; /* current value of tail register */
uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
@@ -55,16 +54,14 @@ struct ci_tx_queue {
uint16_t tx_free_thresh;
/* Number of TX descriptors to use before RS bit is set. */
uint16_t tx_rs_thresh;
- uint8_t pthresh; /**< Prefetch threshold register. */
- uint8_t hthresh; /**< Host threshold register. */
- uint8_t wthresh; /**< Write-back threshold reg. */
uint16_t port_id; /* Device port identifier. */
uint16_t queue_id; /* TX queue index. */
uint16_t reg_idx;
- uint64_t offloads;
uint16_t tx_next_dd;
uint16_t tx_next_rs;
+ uint64_t offloads;
uint64_t mbuf_errors;
+ rte_iova_t tx_ring_dma; /* TX ring DMA address */
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
union { /* the VSI this queue belongs to */
@@ -95,9 +92,10 @@ struct ci_tx_queue {
const struct ixgbe_txq_ops *ops;
struct ixgbe_advctx_info *ctx_cache;
uint32_t ctx_curr;
-#ifdef RTE_LIB_SECURITY
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */
-#endif
};
};
};
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 305bc53480..539b170266 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2539,9 +2539,6 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->nb_tx_desc = nb_desc;
txq->tx_rs_thresh = tx_rs_thresh;
txq->tx_free_thresh = tx_free_thresh;
- txq->pthresh = tx_conf->tx_thresh.pthresh;
- txq->hthresh = tx_conf->tx_thresh.hthresh;
- txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
txq->reg_idx = reg_idx;
txq->port_id = dev->data->port_id;
@@ -3310,9 +3307,9 @@ i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_thresh.pthresh = txq->pthresh;
- qinfo->conf.tx_thresh.hthresh = txq->hthresh;
- qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_thresh.pthresh = I40E_DEFAULT_TX_PTHRESH;
+ qinfo->conf.tx_thresh.hthresh = I40E_DEFAULT_TX_HTHRESH;
+ qinfo->conf.tx_thresh.wthresh = I40E_DEFAULT_TX_WTHRESH;
qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index bcc7c7a016..e2e147ba3e 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1492,9 +1492,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
txq->nb_tx_desc = nb_desc;
txq->tx_rs_thresh = tx_rs_thresh;
txq->tx_free_thresh = tx_free_thresh;
- txq->pthresh = tx_conf->tx_thresh.pthresh;
- txq->hthresh = tx_conf->tx_thresh.hthresh;
- txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
txq->reg_idx = vsi->base_queue + queue_idx;
@@ -1583,9 +1580,9 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
qinfo->nb_desc = txq->nb_tx_desc;
- qinfo->conf.tx_thresh.pthresh = txq->pthresh;
- qinfo->conf.tx_thresh.hthresh = txq->hthresh;
- qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_thresh.pthresh = ICE_DEFAULT_TX_PTHRESH;
+ qinfo->conf.tx_thresh.hthresh = ICE_DEFAULT_TX_HTHRESH;
+ qinfo->conf.tx_thresh.wthresh = ICE_DEFAULT_TX_WTHRESH;
qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
--
2.43.0
* [PATCH v4 11/24] net/_common_intel: add post-Tx buffer free function
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (9 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 10/24] net/_common_intel: pack " Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 12/24] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
` (12 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
The actions taken for post-Tx buffer free in the SSE and AVX code paths
of the i40e, iavf and ice drivers are all common, so centralize them in
the _common_intel driver code.
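The shape of the refactor is that each driver now supplies only a small
inline predicate for the DD (descriptor done) bit and delegates the
mbuf recycling to the common ci_tx_free_bufs(); since both sides are
inline, the compiler resolves the callback statically and no indirect
call is paid on the fast path. Condensed from the i40e hunks below,
the driver side reduces to:

/* driver predicate: non-zero once HW has written back descriptor 'idx' */
static inline int
i40e_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
{
	return (txq->i40e_tx_ring[idx].cmd_type_offset_bsz &
		rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
		rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
}

static __rte_always_inline int
i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
	/* all mbuf recycling logic now lives in the common helper */
	return ci_tx_free_bufs(txq, i40e_tx_desc_done);
}

Note also that the consolidated helper carries over the MBUF_FAST_FREE
shortcut from the i40e version; as the removed hunks show, the iavf and
ice variants previously lacked that branch.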
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 71 ++++++++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_common.h | 72 ++++---------------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 61 ++++-----------------
drivers/net/ice/ice_rxtx_vec_common.h | 61 ++++-----------------
4 files changed, 98 insertions(+), 167 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index c372d2838b..a930309c05 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -7,6 +7,7 @@
#include <stdint.h>
#include <rte_mbuf.h>
+#include <rte_ethdev.h>
/* forward declaration of the common intel (ci) queue structure */
struct ci_tx_queue;
@@ -107,4 +108,74 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
txep[i].mbuf = tx_pkts[i];
}
+#define IETH_VPMD_TX_MAX_FREE_BUF 64
+
+typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
+
+static __rte_always_inline int
+ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+{
+ struct ci_tx_entry *txep;
+ uint32_t n;
+ uint32_t i;
+ int nb_free = 0;
+ struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
+
+ /* check DD bits on threshold descriptor */
+ if (!desc_done(txq, txq->tx_next_dd))
+ return 0;
+
+ n = txq->tx_rs_thresh;
+
+ /* first buffer to free from S/W ring is at index
+ * tx_next_dd - (tx_rs_thresh-1)
+ */
+ txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
+
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
+ for (i = 0; i < n; i++) {
+ free[i] = txep[i].mbuf;
+ /* no need to reset txep[i].mbuf in vector path */
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
+ goto done;
+ }
+
+ m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+ if (likely(m != NULL)) {
+ free[0] = m;
+ nb_free = 1;
+ for (i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (likely(m != NULL)) {
+ if (likely(m->pool == free[0]->pool)) {
+ free[nb_free++] = m;
+ } else {
+ rte_mempool_put_bulk(free[0]->pool,
+ (void *)free,
+ nb_free);
+ free[0] = m;
+ nb_free = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+ } else {
+ for (i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (m != NULL)
+ rte_mempool_put(m->pool, m);
+ }
+ }
+
+done:
+ /* buffers were freed, update counters */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ return txq->tx_rs_thresh;
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 57d6263ccf..907d32dd0b 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -16,72 +16,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+i40e_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->i40e_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
+ rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
i40e_tx_free_bufs(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
- for (i = 0; i < n; i++) {
- free[i] = txep[i].mbuf;
- /* no need to reset txep[i].mbuf in vector path */
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m != NULL)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, i40e_tx_desc_done);
}
static inline void
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index f1bb12c4f4..7130229f23 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -16,61 +16,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+iavf_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->iavf_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) ==
+ rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
iavf_tx_free_bufs(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, iavf_tx_desc_done);
}
static inline void
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index b39289ceb5..c6c3933299 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -12,61 +12,18 @@
#pragma GCC diagnostic ignored "-Wcast-qual"
#endif
+static inline int
+ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+ return (txq->ice_tx_ring[idx].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) ==
+ rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+}
+
static __rte_always_inline int
ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
- rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
+ return ci_tx_free_bufs(txq, ice_tx_desc_done);
}
static inline void
--
2.43.0
* [PATCH v4 12/24] net/_common_intel: add Tx buffer free fn for AVX-512
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (10 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 11/24] net/_common_intel: add post-Tx buffer free function Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 13/24] net/iavf: use common Tx " Bruce Richardson
` (11 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes, Anatoly Burakov
The AVX-512 code paths for the ice and i40e drivers are common, and
differ from the regular post-Tx free function in that the SW ring from
which the buffers are freed contains nothing other than the mbuf
pointer. Merge these into a common function in _common_intel to reduce
duplication.
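One point implicit in the hunks below: the wholesale hand-off of SW
ring entries to the mempool is only possible because struct
ci_tx_entry_vec holds just the mbuf pointer, so a slice of the vector
SW ring is already a contiguous array of pointers. A minimal sketch of
that property (helper name hypothetical):

#include <rte_mempool.h>
#include "_common_intel/tx.h"

/* hypothetical helper: valid only because sizeof(struct ci_tx_entry_vec)
 * equals sizeof(struct rte_mbuf *), making the slice usable in place as
 * an object table
 */
static inline int
enqueue_swring_slice(struct rte_mempool *mp,
		struct ci_tx_entry_vec *txep, unsigned int n)
{
	return rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
}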
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 92 +++++++++++++++++++
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 114 +----------------------
drivers/net/ice/ice_rxtx_vec_avx512.c | 117 +-----------------------
3 files changed, 94 insertions(+), 229 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index a930309c05..84ff839672 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -178,4 +178,96 @@ ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
return txq->tx_rs_thresh;
}
+static __rte_always_inline int
+ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+{
+ int nb_free = 0;
+ struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF];
+ struct rte_mbuf *m;
+
+ /* check DD bits on threshold descriptor */
+ if (!desc_done(txq, txq->tx_next_dd))
+ return 0;
+
+ const uint32_t n = txq->tx_rs_thresh;
+
+ /* first buffer to free from S/W ring is at index
+ * tx_next_dd - (tx_rs_thresh - 1)
+ */
+ struct ci_tx_entry_vec *txep = txq->sw_ring_vec;
+ txep += txq->tx_next_dd - (n - 1);
+
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ struct rte_mempool *mp = txep[0].mbuf->pool;
+ void **cache_objs;
+ struct rte_mempool_cache *cache = rte_mempool_default_cache(mp, rte_lcore_id());
+
+ if (!cache || cache->len == 0)
+ goto normal;
+
+ cache_objs = &cache->objs[cache->len];
+
+ if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
+ rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
+ goto done;
+ }
+
+ /* The cache follows the following algorithm
+ * 1. Add the objects to the cache
+ * 2. Anything greater than the cache min value (if it
+ * crosses the cache flush threshold) is flushed to the ring.
+ */
+ /* Add elements back into the cache */
+ uint32_t copied = 0;
+ /* n is multiple of 32 */
+ while (copied < n) {
+ memcpy(&cache_objs[copied], &txep[copied], 32 * sizeof(void *));
+ copied += 32;
+ }
+ cache->len += n;
+
+ if (cache->len >= cache->flushthresh) {
+ rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
+ cache->len - cache->size);
+ cache->len = cache->size;
+ }
+ goto done;
+ }
+
+normal:
+ m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+ if (likely(m)) {
+ free[0] = m;
+ nb_free = 1;
+ for (uint32_t i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (likely(m)) {
+ if (likely(m->pool == free[0]->pool)) {
+ free[nb_free++] = m;
+ } else {
+ rte_mempool_put_bulk(free[0]->pool, (void *)free, nb_free);
+ free[0] = m;
+ nb_free = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+ } else {
+ for (uint32_t i = 1; i < n; i++) {
+ m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (m)
+ rte_mempool_put(m->pool, m);
+ }
+ }
+
+done:
+ /* buffers were freed, update counters */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ return txq->tx_rs_thresh;
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index a3f6d1667f..9bb2a44231 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -754,118 +754,6 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
rx_pkts + retval, nb_pkts);
}
-static __rte_always_inline int
-i40e_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = (void *)txq->sw_ring;
- txep += txq->tx_next_dd - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- void **cache_objs;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
-
- if (!cache || n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_generic_put(mp, (void *)txep, n, cache);
- goto done;
- }
-
- cache_objs = &cache->objs[cache->len];
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it
- * crosses the cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_load_si512(&txep[copied]);
- const __m512i b = _mm512_load_si512(&txep[copied + 8]);
- const __m512i c = _mm512_load_si512(&txep[copied + 16]);
- const __m512i d = _mm512_load_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_load_si512(&txep[copied]);
- const __m512i b = _mm512_load_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk
- (mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- rte_mbuf_prefetch_part2(txep[i + 3].mbuf);
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static inline void
vtx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
{
@@ -941,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index eabd8b04a0..538be707ef 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -859,121 +859,6 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
rx_pkts + retval, nb_pkts);
}
-static __rte_always_inline int
-ice_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ];
-
- /* check DD bits on threshold descriptor */
- if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
- rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh - 1)
- */
- txep = (void *)txq->sw_ring;
- txep += txq->tx_next_dd - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- void **cache_objs;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
-
- if (!cache || cache->len == 0)
- goto normal;
-
- cache_objs = &cache->objs[cache->len];
-
- if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
- goto done;
- }
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it
- * crosses the cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
- const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
- const __m512i d = _mm512_loadu_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk
- (mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
-normal:
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline void
ice_vtx1(volatile struct ice_tx_desc *txdp,
struct rte_mbuf *pkt, uint64_t flags, bool do_offload)
@@ -1064,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
--
2.43.0
* [PATCH v4 13/24] net/iavf: use common Tx free fn for AVX-512
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (11 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 12/24] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 14/24] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
` (10 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes,
Vladimir Medvedkin, Anatoly Burakov
Switch the iavf driver to use the common Tx free function. This
requires one additional parameter to that function, since iavf
sometimes uses context descriptors, which means there are two
descriptors per SW ring slot.
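Concretely: with context descriptors enabled, each packet occupies two
descriptors (context + data) but only one SW ring slot, so for, say,
tx_rs_thresh = 32 the helper frees n = 32 >> 1 = 16 mbufs per cleanup
round, and shifts tx_next_dd right by the same amount to find the
matching SW ring index, as the tx.h hunk below shows.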
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 6 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 119 +-----------------------
drivers/net/ice/ice_rxtx_vec_avx512.c | 2 +-
4 files changed, 7 insertions(+), 122 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 84ff839672..26aef528fa 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -179,7 +179,7 @@ ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
}
static __rte_always_inline int
-ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
+ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
{
int nb_free = 0;
struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF];
@@ -189,13 +189,13 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
if (!desc_done(txq, txq->tx_next_dd))
return 0;
- const uint32_t n = txq->tx_rs_thresh;
+ const uint32_t n = txq->tx_rs_thresh >> ctx_descs;
/* first buffer to free from S/W ring is at index
* tx_next_dd - (tx_rs_thresh - 1)
*/
struct ci_tx_entry_vec *txep = txq->sw_ring_vec;
- txep += txq->tx_next_dd - (n - 1);
+ txep += (txq->tx_next_dd >> ctx_descs) - (n - 1);
if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 9bb2a44231..c555c3491d 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -829,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- ci_tx_free_bufs_vec(txq, i40e_tx_desc_done);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 9cf7171524..8543490c70 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1844,121 +1844,6 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue,
true);
}
-static __rte_always_inline int
-iavf_tx_free_bufs_avx512(struct ci_tx_queue *txq)
-{
- struct ci_tx_entry_vec *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
- rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) !=
- rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE))
- return 0;
-
- n = txq->tx_rs_thresh >> txq->use_ctx;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = (void *)txq->sw_ring;
- txep += (txq->tx_next_dd >> txq->use_ctx) - (n - 1);
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
- struct rte_mempool *mp = txep[0].mbuf->pool;
- struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
- rte_lcore_id());
- void **cache_objs;
-
- if (!cache || cache->len == 0)
- goto normal;
-
- cache_objs = &cache->objs[cache->len];
-
- if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
- rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
- goto done;
- }
-
- /* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it crosses the
- * cache flush threshold) is flushed to the ring.
- */
- /* Add elements back into the cache */
- uint32_t copied = 0;
- /* n is multiple of 32 */
- while (copied < n) {
-#ifdef RTE_ARCH_64
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
- const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
- const __m512i d = _mm512_loadu_si512(&txep[copied + 24]);
-
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 8], b);
- _mm512_storeu_si512(&cache_objs[copied + 16], c);
- _mm512_storeu_si512(&cache_objs[copied + 24], d);
-#else
- const __m512i a = _mm512_loadu_si512(&txep[copied]);
- const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
- _mm512_storeu_si512(&cache_objs[copied], a);
- _mm512_storeu_si512(&cache_objs[copied + 16], b);
-#endif
- copied += 32;
- }
- cache->len += n;
-
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp,
- &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
- }
- goto done;
- }
-
-normal:
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline void
tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -2320,7 +2205,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -2388,7 +2273,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs_avx512(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, true);
nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1);
nb_commit &= 0xFFFE;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 538be707ef..f6ec593f96 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -949,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ci_tx_free_bufs_vec(txq, ice_tx_desc_done);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
--
2.43.0
* [PATCH v4 14/24] net/ice: move Tx queue mbuf cleanup fn to common
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (12 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 13/24] net/iavf: use common Tx " Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 15/24] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
` (9 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Konstantin Ananyev
The function to loop over the Tx queue and clean up all the mbufs on
it, e.g. for queue shutdown, is not device specific and so can move
into the common_intel headers. The only complication is ensuring that
the correct ring format, either minimal vector or full structure, is
used. The ice driver currently uses two functions and a function
pointer to help with this - though one of those functions actually
contains a further check inside it - so we can simplify this down to
just one common function, with a flag set in the appropriate place.
This avoids checking for AVX-512-specific functions, which were the
only ones using the smaller struct in this driver.
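After this change the per-queue state is just two booleans, recorded
once at queue start, and teardown calls ci_txq_release_all_mbufs()
unconditionally with no burst-function introspection. A minimal sketch
of the wiring (wrapper name hypothetical; the assignments match the
ice hunk below):

#include <stdbool.h>
#include "_common_intel/tx.h"

/* record at queue start what kind of cleanup teardown will need */
static inline void
record_cleanup_mode(struct ci_tx_queue *txq, bool vec_allowed, bool use_avx512)
{
	txq->vector_tx = vec_allowed;      /* a vector burst fn is in use */
	txq->vector_sw_ring = use_avx512;  /* vec path uses minimal SW ring */
}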
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 49 ++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.c | 5 +--
drivers/net/ice/ice_ethdev.h | 3 +-
drivers/net/ice/ice_rxtx.c | 33 +++++------------
drivers/net/ice/ice_rxtx_vec_common.h | 51 ---------------------------
drivers/net/ice/ice_rxtx_vec_sse.c | 4 ---
6 files changed, 60 insertions(+), 85 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 26aef528fa..1bf2a61b2f 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -65,6 +65,8 @@ struct ci_tx_queue {
rte_iova_t tx_ring_dma; /* TX ring DMA address */
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
+ bool vector_tx; /* port is using vector TX */
+ bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
union { /* the VSI this queue belongs to */
struct i40e_vsi *i40e_vsi;
struct iavf_vsi *iavf_vsi;
@@ -74,7 +76,6 @@ struct ci_tx_queue {
union {
struct { /* ICE driver specific values */
- ice_tx_release_mbufs_t tx_rel_mbufs;
uint32_t q_teid; /* TX schedule node id. */
};
struct { /* I40E driver specific values */
@@ -270,4 +271,50 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
return txq->tx_rs_thresh;
}
+#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \
+ uint16_t i = start; \
+ if (txq->tx_tail < i) { \
+ for (; i < txq->nb_tx_desc; i++) { \
+ rte_pktmbuf_free_seg(swr[i].mbuf); \
+ swr[i].mbuf = NULL; \
+ } \
+ i = 0; \
+ } \
+ for (; i < txq->tx_tail; i++) { \
+ rte_pktmbuf_free_seg(swr[i].mbuf); \
+ swr[i].mbuf = NULL; \
+ } \
+} while (0)
+
+static inline void
+ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
+{
+ if (unlikely(!txq || !txq->sw_ring))
+ return;
+
+ if (!txq->vector_tx) {
+ for (uint16_t i = 0; i < txq->nb_tx_desc; i++) {
+ if (txq->sw_ring[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+ txq->sw_ring[i].mbuf = NULL;
+ }
+ }
+ return;
+ }
+
+ /**
+ * vPMD tx will not set sw_ring's mbuf to NULL after free,
+ * so need to free remains more carefully.
+ */
+ const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;
+
+ if (txq->vector_sw_ring) {
+ struct ci_tx_entry_vec *swr = txq->sw_ring_vec;
+ IETH_FREE_BUFS_LOOP(txq, swr, start);
+ } else {
+ struct ci_tx_entry *swr = txq->sw_ring;
+ IETH_FREE_BUFS_LOOP(txq, swr, start);
+ }
+}
+
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index a0c065d78c..c20399cd84 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -24,6 +24,7 @@
#include "ice_generic_flow.h"
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#include "_common_intel/tx.h"
#define DCF_NUM_MACADDR_MAX 64
@@ -500,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -650,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ba54655499..afe8dae497 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -621,13 +621,12 @@ struct ice_adapter {
/* Set bit if the engine is disabled */
unsigned long disabled_engine_mask;
struct ice_parser *psr;
-#ifdef RTE_ARCH_X86
+ /* used only on X86, zero on other Archs */
bool rx_use_avx2;
bool rx_use_avx512;
bool tx_use_avx2;
bool tx_use_avx512;
bool rx_vec_offload_support;
-#endif
};
struct ice_vsi_vlan_pvid_info {
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index e2e147ba3e..0a890e587c 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -751,6 +751,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
struct ice_aqc_add_tx_qgrp *txq_elem;
struct ice_tlan_ctx tx_ctx;
int buf_len;
+ struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -822,6 +823,10 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EIO;
}
+ /* record what kind of descriptor cleanup we need on teardown */
+ txq->vector_tx = ad->tx_vec_allowed;
+ txq->vector_sw_ring = ad->tx_use_avx512;
+
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
rte_free(txq_elem);
@@ -1006,25 +1011,6 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return 0;
}
-/* Free all mbufs for descriptors in tx queue */
-static void
-_ice_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
- return;
- }
-
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static void
ice_reset_tx_queue(struct ci_tx_queue *txq)
{
@@ -1103,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
ice_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1166,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- txq->tx_rel_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->qtx_tail = NULL;
return 0;
@@ -1518,7 +1504,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
ice_reset_tx_queue(txq);
txq->q_set = true;
dev->data->tx_queues[queue_idx] = txq;
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs;
ice_set_tx_function_flag(dev, txq);
return 0;
@@ -1546,8 +1531,7 @@ ice_tx_queue_release(void *txq)
return;
}
- if (q->tx_rel_mbufs != NULL)
- q->tx_rel_mbufs(q);
+ ci_txq_release_all_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -2460,7 +2444,6 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf)
txq->q_set = true;
pf->fdir.txq = txq;
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs;
return ICE_SUCCESS;
}
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index c6c3933299..907828b675 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -61,57 +61,6 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline void
-_ice_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (unlikely(!txq || !txq->sw_ring)) {
- PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
- return;
- }
-
- /**
- * vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
- */
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
-
-#ifdef __AVX512VL__
- struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id];
-
- if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
- dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- } else
-#endif
- {
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static inline int
ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index f11528385a..bff39c28d8 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -795,10 +795,6 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq)
int __rte_cold
ice_txq_vec_setup(struct ci_tx_queue *txq __rte_unused)
{
- if (!txq)
- return -1;
-
- txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs_vec;
return 0;
}
--
2.43.0
* [PATCH v4 15/24] net/i40e: use common Tx queue mbuf cleanup fn
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (13 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 14/24] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 16/24] net/ixgbe: " Bruce Richardson
` (8 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes
Update the driver to be similar to the "ice" driver and use the common
mbuf ring cleanup code on shutdown of a Tx queue.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_ethdev.h | 4 +-
drivers/net/i40e/i40e_rxtx.c | 70 ++++------------------------------
drivers/net/i40e/i40e_rxtx.h | 1 -
3 files changed, 9 insertions(+), 66 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index d351193ed9..ccc8732d7d 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -1260,12 +1260,12 @@ struct i40e_adapter {
/* For RSS reta table update */
uint8_t rss_reta_updated;
-#ifdef RTE_ARCH_X86
+
+ /* used only on x86, zero on other architectures */
bool rx_use_avx2;
bool rx_use_avx512;
bool tx_use_avx2;
bool tx_use_avx512;
-#endif
};
/**
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 539b170266..b70919c5dc 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1875,6 +1875,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int err;
struct ci_tx_queue *txq;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct i40e_adapter *ad = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1889,6 +1890,9 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
PMD_DRV_LOG(WARNING, "TX queue %u is deferred start",
tx_queue_id);
+ txq->vector_tx = ad->tx_vec_allowed;
+ txq->vector_sw_ring = ad->tx_use_avx512;
+
/*
* tx_queue_id is queue id application refers to, while
* rxq->reg_idx is the real queue index.
@@ -1929,7 +1933,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return err;
}
- i40e_tx_queue_release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
i40e_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -2604,7 +2608,7 @@ i40e_tx_queue_release(void *txq)
return;
}
- i40e_tx_queue_release_mbufs(q);
+ ci_txq_release_all_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -2701,66 +2705,6 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
rxq->rxrearm_nb = 0;
}
-void
-i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- struct rte_eth_dev *dev;
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
- return;
- }
-
- dev = &rte_eth_devices[txq->port_id];
-
- /**
- * vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
- */
-#ifdef CC_AVX512_SUPPORT
- if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- }
- return;
- }
-#endif
- if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx2 ||
- dev->tx_pkt_burst == i40e_xmit_pkts_vec) {
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- if (txq->tx_tail < i) {
- for (; i < txq->nb_tx_desc; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- i = 0;
- }
- for (; i < txq->tx_tail; i++) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- } else {
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
- }
-}
-
static int
i40e_tx_done_cleanup_full(struct ci_tx_queue *txq,
uint32_t free_cnt)
@@ -3127,7 +3071,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
if (!dev->data->tx_queues[i])
continue;
- i40e_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+ ci_txq_release_all_mbufs(dev->data->tx_queues[i]);
i40e_reset_tx_queue(dev->data->tx_queues[i]);
}
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 043d1df912..858b8433e9 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -179,7 +179,6 @@ void i40e_dev_clear_queues(struct rte_eth_dev *dev);
void i40e_dev_free_queues(struct rte_eth_dev *dev);
void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
void i40e_reset_tx_queue(struct ci_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq);
int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
--
2.43.0
* [PATCH v4 16/24] net/ixgbe: use common Tx queue mbuf cleanup fn
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (14 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 15/24] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 17/24] net/iavf: " Bruce Richardson
` (7 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Update the driver to use the common cleanup function.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 22 +++---------------
drivers/net/ixgbe/ixgbe_rxtx.h | 1 -
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 28 ++---------------------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 7 ------
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 7 ------
5 files changed, 5 insertions(+), 60 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 344ef85685..bf9d461b06 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2334,21 +2334,6 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
*
**********************************************************************/
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs(struct ci_tx_queue *txq)
-{
- unsigned i;
-
- if (txq->sw_ring != NULL) {
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf != NULL) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
- }
-}
-
static int
ixgbe_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt)
{
@@ -2472,7 +2457,7 @@ static void __rte_cold
ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->free_swring(txq);
rte_memzone_free(txq->mz);
rte_free(txq);
@@ -2526,7 +2511,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops def_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
@@ -3380,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->reset(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -5655,7 +5639,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
if (txq->ops != NULL) {
- txq->ops->release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq);
txq->ops->reset(txq);
}
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 4333e5bf2f..11689eb432 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -181,7 +181,6 @@ struct ixgbe_advctx_info {
};
struct ixgbe_txq_ops {
- void (*release_mbufs)(struct ci_tx_queue *txq);
void (*free_swring)(struct ci_tx_queue *txq);
void (*reset)(struct ci_tx_queue *txq);
};
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 06e760867c..2b12bdcc9c 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -78,32 +78,6 @@ tx_backlog_entry(struct ci_tx_entry_vec *txep,
txep[i].mbuf = tx_pkts[i];
}
-static inline void
-_ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- unsigned int i;
- struct ci_tx_entry_vec *txe;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-
- if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc)
- return;
-
- /* release the used mbufs in sw_ring */
- for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
- i != txq->tx_tail;
- i = (i + 1) % txq->nb_tx_desc) {
- txe = &txq->sw_ring_vec[i];
- rte_pktmbuf_free_seg(txe->mbuf);
- }
- txq->nb_tx_free = max_desc;
-
- /* reset tx_entry */
- for (i = 0; i < txq->nb_tx_desc; i++) {
- txe = &txq->sw_ring_vec[i];
- txe->mbuf = NULL;
- }
-}
-
static inline void
_ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -207,6 +181,8 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
/* leave the first one for overflow */
txq->sw_ring_vec = txq->sw_ring_vec + 1;
txq->ops = txq_ops;
+ txq->vector_tx = 1;
+ txq->vector_sw_ring = 1;
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index cb749a3760..2ccb399b64 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -633,12 +633,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_pkts;
}
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- _ixgbe_tx_queue_release_mbufs_vec(txq);
-}
-
void __rte_cold
ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -658,7 +652,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops vec_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index e46550f76a..fa26365f06 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -756,12 +756,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_pkts;
}
-static void __rte_cold
-ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- _ixgbe_tx_queue_release_mbufs_vec(txq);
-}
-
void __rte_cold
ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
@@ -781,7 +775,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq)
}
static const struct ixgbe_txq_ops vec_txq_ops = {
- .release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
.free_swring = ixgbe_tx_free_swring,
.reset = ixgbe_reset_tx_queue,
};
--
2.43.0
* [PATCH v4 17/24] net/iavf: use common Tx queue mbuf cleanup fn
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (15 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 16/24] net/ixgbe: " Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 18/24] net/ice: use vector SW ring for all vector paths Bruce Richardson
` (6 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin,
Konstantin Ananyev, Anatoly Burakov
Adjust the iavf driver to also use the common mbuf freeing function on
Tx queue release/cleanup. The implementation is complicated a little by
the need to integrate the additional "use_ctx" parameter for the iavf
code, but the changes in other drivers are minimal - just a constant
"false" parameter.
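As a sketch of what the new parameter means (illustrative helper, not
part of the patch): when context descriptors are in use, each packet
consumes two HW descriptors but only one SW ring entry, so descriptor
indexes are halved when computing the range of entries to free.
#include <stdbool.h>
#include <stdint.h>
/* Illustrative only: map a HW descriptor index to a SW ring entry
 * index. With use_ctx == true each entry covers two descriptors, so
 * the index is shifted right by one, e.g. tx_tail 300 -> entry 150.
 */
static inline uint16_t
desc_to_entry_idx(uint16_t desc_idx, bool use_ctx)
{
	return desc_idx >> use_ctx;
}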
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 27 +++++++++---------
drivers/net/i40e/i40e_rxtx.c | 6 ++--
drivers/net/iavf/iavf_rxtx.c | 37 ++-----------------------
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 24 ++--------------
drivers/net/iavf/iavf_rxtx_vec_common.h | 18 ------------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 9 ++----
drivers/net/ice/ice_dcf_ethdev.c | 4 +--
drivers/net/ice/ice_rxtx.c | 6 ++--
drivers/net/ixgbe/ixgbe_rxtx.c | 6 ++--
9 files changed, 31 insertions(+), 106 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 1bf2a61b2f..310b51adcf 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -271,23 +271,23 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
return txq->tx_rs_thresh;
}
-#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \
+#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
uint16_t i = start; \
- if (txq->tx_tail < i) { \
- for (; i < txq->nb_tx_desc; i++) { \
+ if (end < i) { \
+ for (; i < nb_desc; i++) { \
rte_pktmbuf_free_seg(swr[i].mbuf); \
swr[i].mbuf = NULL; \
} \
i = 0; \
} \
- for (; i < txq->tx_tail; i++) { \
+ for (; i < end; i++) { \
rte_pktmbuf_free_seg(swr[i].mbuf); \
swr[i].mbuf = NULL; \
} \
} while (0)
static inline void
-ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
+ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
{
if (unlikely(!txq || !txq->sw_ring))
return;
@@ -306,15 +306,14 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
* vPMD tx will not set sw_ring's mbuf to NULL after free,
* so need to free remains more carefully.
*/
- const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;
-
- if (txq->vector_sw_ring) {
- struct ci_tx_entry_vec *swr = txq->sw_ring_vec;
- IETH_FREE_BUFS_LOOP(txq, swr, start);
- } else {
- struct ci_tx_entry *swr = txq->sw_ring;
- IETH_FREE_BUFS_LOOP(txq, swr, start);
- }
+ const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
+ const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
+ const uint16_t end = txq->tx_tail >> use_ctx;
+
+ if (txq->vector_sw_ring)
+ IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
+ else
+ IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
}
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b70919c5dc..081d743e62 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1933,7 +1933,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return err;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
i40e_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -2608,7 +2608,7 @@ i40e_tx_queue_release(void *txq)
return;
}
- ci_txq_release_all_mbufs(q);
+ ci_txq_release_all_mbufs(q, false);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -3071,7 +3071,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
if (!dev->data->tx_queues[i])
continue;
- ci_txq_release_all_mbufs(dev->data->tx_queues[i]);
+ ci_txq_release_all_mbufs(dev->data->tx_queues[i], false);
i40e_reset_tx_queue(dev->data->tx_queues[i]);
}
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 7e381b2a17..f0ab881ac5 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -387,24 +387,6 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq)
rxq->rx_nb_avail = 0;
}
-static inline void
-release_txq_mbufs(struct ci_tx_queue *txq)
-{
- uint16_t i;
-
- if (!txq || !txq->sw_ring) {
- PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
- return;
- }
-
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_ring[i].mbuf) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- }
- }
-}
-
static const
struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
[IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_rxq_mbufs,
@@ -413,18 +395,6 @@ struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
#endif
};
-static const
-struct iavf_txq_ops iavf_txq_release_mbufs_ops[] = {
- [IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_txq_mbufs,
-#ifdef RTE_ARCH_X86
- [IAVF_REL_MBUFS_SSE_VEC].release_mbufs = iavf_tx_queue_release_mbufs_sse,
-#ifdef CC_AVX512_SUPPORT
- [IAVF_REL_MBUFS_AVX512_VEC].release_mbufs = iavf_tx_queue_release_mbufs_avx512,
-#endif
-#endif
-
-};
-
static inline void
iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
struct rte_mbuf *mb,
@@ -889,7 +859,6 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->q_set = true;
dev->data->tx_queues[queue_idx] = txq;
txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(queue_idx);
- txq->rel_mbufs_type = IAVF_REL_MBUFS_DEFAULT;
if (check_tx_vec_allow(txq) == false) {
struct iavf_adapter *ad =
@@ -1068,7 +1037,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, txq->use_ctx);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1097,7 +1066,7 @@ iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!q)
return;
- iavf_txq_release_mbufs_ops[q->rel_mbufs_type].release_mbufs(q);
+ ci_txq_release_all_mbufs(q, q->use_ctx);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -1114,7 +1083,7 @@ iavf_reset_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, txq->use_ctx);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 8543490c70..007759e451 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2357,31 +2357,11 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
}
-void __rte_cold
-iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq)
-{
- unsigned int i;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
- const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */
- const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
- struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
- if (!txq->sw_ring || txq->nb_tx_free == max_desc)
- return;
-
- i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx;
- while (i != end_desc) {
- rte_pktmbuf_free_seg(swr[i].mbuf);
- swr[i].mbuf = NULL;
- if (++i == wrap_point)
- i = 0;
- }
-}
-
int __rte_cold
iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
{
- txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC;
+ txq->vector_tx = true;
+ txq->vector_sw_ring = true;
return 0;
}
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 7130229f23..6f94587eee 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -60,24 +60,6 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline void
-_iavf_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
- unsigned i;
- const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-
- if (!txq->sw_ring || txq->nb_tx_free == max_desc)
- return;
-
- i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
- while (i != txq->tx_tail) {
- rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
- txq->sw_ring[i].mbuf = NULL;
- if (++i == txq->nb_tx_desc)
- i = 0;
- }
-}
-
static inline int
iavf_rxq_vec_setup_default(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 5c0b2fff46..3adf2a59e4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1458,16 +1458,11 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
_iavf_rx_queue_release_mbufs_vec(rxq);
}
-void __rte_cold
-iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq)
-{
- _iavf_tx_queue_release_mbufs_vec(txq);
-}
-
int __rte_cold
iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
- txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
+ txq->vector_tx = true;
+ txq->vector_sw_ring = false;
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index c20399cd84..57fe44ebb3 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -501,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq = dev->data->tx_queues[tx_queue_id];
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -651,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0a890e587c..ad0ddf6a88 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1089,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
ice_reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
@@ -1152,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return -EINVAL;
}
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->qtx_tail = NULL;
return 0;
@@ -1531,7 +1531,7 @@ ice_tx_queue_release(void *txq)
return;
}
- ci_txq_release_all_mbufs(q);
+ ci_txq_release_all_mbufs(q, false);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bf9d461b06..3b7a6a6f0e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2457,7 +2457,7 @@ static void __rte_cold
ixgbe_tx_queue_release(struct ci_tx_queue *txq)
{
if (txq != NULL && txq->ops != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->free_swring(txq);
rte_memzone_free(txq->mz);
rte_free(txq);
@@ -3364,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
struct ci_tx_queue *txq = dev->data->tx_queues[i];
if (txq != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->reset(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -5639,7 +5639,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
if (txq->ops != NULL) {
- ci_txq_release_all_mbufs(txq);
+ ci_txq_release_all_mbufs(txq, false);
txq->ops->reset(txq);
}
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
--
2.43.0
* [PATCH v4 18/24] net/ice: use vector SW ring for all vector paths
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (16 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 17/24] net/iavf: " Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 19/24] net/i40e: " Bruce Richardson
` (5 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Anatoly Burakov, Konstantin Ananyev
The AVX-512 code path used a smaller SW ring structure only containing
the mbuf pointer, but no other fields. The other fields are only used in
the scalar code path, so update all vector driver code paths to use the
smaller, faster structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 7 +++++++
drivers/net/ice/ice_rxtx.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/ice/ice_rxtx_vec_avx512.c | 14 ++------------
drivers/net/ice/ice_rxtx_vec_common.h | 6 ------
drivers/net/ice/ice_rxtx_vec_sse.c | 12 ++++++------
6 files changed, 22 insertions(+), 31 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 310b51adcf..aa42b9b49f 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -109,6 +109,13 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
txep[i].mbuf = tx_pkts[i];
}
+static __rte_always_inline void
+ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ for (uint16_t i = 0; i < nb_pkts; ++i)
+ txep[i].mbuf = tx_pkts[i];
+}
+
#define IETH_VPMD_TX_MAX_FREE_BUF 64
typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ad0ddf6a88..77cb6688a7 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* record what kind of descriptor cleanup we need on teardown */
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = ad->tx_use_avx512;
+ txq->vector_sw_ring = txq->vector_tx;
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 12ffa0fa9a..98bab322b4 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -867,7 +867,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_vec(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -875,13 +875,13 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -896,10 +896,10 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index f6ec593f96..481f784e34 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -924,16 +924,6 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
}
}
-static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static __rte_always_inline uint16_t
ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts, bool do_offload)
@@ -964,7 +954,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ice_tx_backlog_entry_avx512(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
ice_vtx(txdp, tx_pkts, n - 1, flags, do_offload);
tx_pkts += (n - 1);
@@ -982,7 +972,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
}
- ice_tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags, do_offload);
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 907828b675..aa709fb51c 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -20,12 +20,6 @@ ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, ice_tx_desc_done);
-}
-
static inline void
_ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
{
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index bff39c28d8..73e3e9eb54 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct ice_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = ICE_TD_CMD;
uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
@@ -709,7 +709,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
if (txq->nb_tx_free < txq->tx_free_thresh)
- ice_tx_free_bufs_vec(txq);
+ ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
nb_commit = nb_pkts;
@@ -718,13 +718,13 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
ice_vtx1(txdp, *tx_pkts, flags);
@@ -738,10 +738,10 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->ice_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
ice_vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v4 19/24] net/i40e: use vector SW ring for all vector paths
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (17 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 18/24] net/ice: use vector SW ring for all vector paths Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 20/24] net/iavf: " Bruce Richardson
` (4 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, David Christensen,
Konstantin Ananyev, Wathsala Vithanage
The AVX-512 code path used a smaller SW ring structure only containing
the mbuf pointer, but no other fields. The other fields are only used in
the scalar code path, so update all vector driver code paths (AVX2, SSE,
Neon, Altivec) to use the smaller, faster structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 8 +++++---
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 14 ++------------
drivers/net/i40e/i40e_rxtx_vec_common.h | 6 ------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 12 ++++++------
drivers/net/i40e/i40e_rxtx_vec_sse.c | 12 ++++++------
7 files changed, 31 insertions(+), 45 deletions(-)
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 081d743e62..745c467912 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
tx_queue_id);
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = ad->tx_use_avx512;
+ txq->vector_sw_ring = txq->vector_tx;
/*
* tx_queue_id is queue id application refers to, while
@@ -3550,9 +3550,11 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
}
}
+ if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_128)
+ ad->tx_vec_allowed = false;
+
if (ad->tx_simple_allowed) {
- if (ad->tx_vec_allowed &&
- rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+ if (ad->tx_vec_allowed) {
#ifdef RTE_ARCH_X86
if (ad->tx_use_avx512) {
#ifdef CC_AVX512_SUPPORT
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 500bba2cef..b6900a3e15 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -553,14 +553,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
nb_commit = nb_pkts;
@@ -569,13 +569,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -589,10 +589,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 29bef64287..2477573c01 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -745,13 +745,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -759,13 +759,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -780,10 +780,10 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index c555c3491d..2497e6a8f0 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -807,16 +807,6 @@ vtx(volatile struct i40e_tx_desc *txdp,
}
}
-static __rte_always_inline void
-tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline uint16_t
i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
@@ -844,7 +834,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry_avx512(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
vtx(txdp, tx_pkts, n - 1, flags);
tx_pkts += (n - 1);
@@ -862,7 +852,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = (void *)txq->sw_ring;
}
- tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 907d32dd0b..733dc797cd 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -24,12 +24,6 @@ i40e_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-i40e_tx_free_bufs(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, i40e_tx_desc_done);
-}
-
static inline void
_i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index c97f337e43..b398d66154 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -681,14 +681,14 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -696,13 +696,13 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -716,10 +716,10 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 2c467e2089..90c57e59d0 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -700,14 +700,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct i40e_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = I40E_TD_CMD;
uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- i40e_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -715,13 +715,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -735,10 +735,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->i40e_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v4 20/24] net/iavf: use vector SW ring for all vector paths
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (18 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 19/24] net/i40e: " Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 21/24] net/_common_intel: remove unneeded code Bruce Richardson
` (3 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Vladimir Medvedkin, Ian Stokes, Konstantin Ananyev
The AVX-512 code path used a smaller SW ring structure only containing
the mbuf pointer, but no other fields. The other fields are only used in
the scalar code path, so update all vector driver code paths (AVX2, SSE)
to use the smaller, faster structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/iavf/iavf_rxtx.c | 7 -------
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 12 ++++++------
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 8 --------
drivers/net/iavf/iavf_rxtx_vec_common.h | 6 ------
drivers/net/iavf/iavf_rxtx_vec_sse.c | 14 +++++++-------
5 files changed, 13 insertions(+), 34 deletions(-)
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index f0ab881ac5..6692f6992b 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -4193,14 +4193,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
-#ifdef CC_AVX512_SUPPORT
- if (use_avx512)
- iavf_txq_vec_setup_avx512(txq);
- else
- iavf_txq_vec_setup(txq);
-#else
iavf_txq_vec_setup(txq);
-#endif
}
if (no_poll_on_link_down) {
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index fdb98b417a..b847886081 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1736,14 +1736,14 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
/* bit2 is reserved and must be set to 1 according to Spec */
uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -1752,13 +1752,13 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
iavf_vtx(txdp, tx_pkts, n - 1, flags, offload);
tx_pkts += (n - 1);
@@ -1773,10 +1773,10 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 007759e451..641f3311eb 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2357,14 +2357,6 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
}
-int __rte_cold
-iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
-{
- txq->vector_tx = true;
- txq->vector_sw_ring = true;
- return 0;
-}
-
uint16_t
iavf_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 6f94587eee..c69399a173 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -24,12 +24,6 @@ iavf_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
}
-static __rte_always_inline int
-iavf_tx_free_bufs(struct ci_tx_queue *txq)
-{
- return ci_tx_free_bufs(txq, iavf_tx_desc_done);
-}
-
static inline void
_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 3adf2a59e4..9f7db80bfd 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1368,14 +1368,14 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
volatile struct iavf_tx_desc *txdp;
- struct ci_tx_entry *txep;
+ struct ci_tx_entry_vec *txep;
uint16_t n, nb_commit, tx_id;
uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */
uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
int i;
if (txq->nb_tx_free < txq->tx_free_thresh)
- iavf_tx_free_bufs(txq);
+ ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
if (unlikely(nb_pkts == 0))
@@ -1384,13 +1384,13 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_id = txq->tx_tail;
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- ci_tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -1404,10 +1404,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
/* avoid reach the end of ring */
txdp = &txq->iavf_tx_ring[tx_id];
- txep = &txq->sw_ring[tx_id];
+ txep = &txq->sw_ring_vec[tx_id];
}
- ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
iavf_vtx(txdp, tx_pkts, nb_commit, flags);
@@ -1462,7 +1462,7 @@ int __rte_cold
iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
txq->vector_tx = true;
- txq->vector_sw_ring = false;
+ txq->vector_sw_ring = txq->vector_tx;
return 0;
}
--
2.43.0
* [PATCH v4 21/24] net/_common_intel: remove unneeded code
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (19 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 20/24] net/iavf: " Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 22/24] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
` (2 subsequent siblings)
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Ian Stokes, Konstantin Ananyev,
Vladimir Medvedkin, Anatoly Burakov
Now that all drivers using the common Tx structure have been updated so
that their vector paths all use the simplified Tx mbuf ring format, it
is no longer necessary to have separate flags for the ring format and
for use of a vector driver.
Remove the former flag and base all decisions off the vector flag. With
that done, we are left with only two paths to consider when releasing
all mbufs in the ring, rather than three. That allows further
simplification of the "ci_txq_release_all_mbufs" function.
The separate function for freeing buffers from a vector driver that
does not use the simplified ring format can similarly be removed, as it
is no longer needed.
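A minimal sketch of the single remaining cleanup path, with
hypothetical values showing the wrap-around case (e.g. nb_desc = 8,
start = 4, end = 2 frees entries 4-7 and then 0-1):
#include <stdint.h>
/* Illustrative only: free entries from start up to (but excluding)
 * end, wrapping at nb_desc, as the simplified
 * ci_txq_release_all_mbufs now does for all drivers.
 */
static void
free_entry_range(uint16_t start, uint16_t end, uint16_t nb_desc,
		void (*free_one)(uint16_t idx))
{
	uint16_t i = start;
	if (end < i) {
		for (; i < nb_desc; i++)
			free_one(i);
		i = 0;
	}
	for (; i < end; i++)
		free_one(i);
}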
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/tx.h | 97 +++--------------------
drivers/net/i40e/i40e_rxtx.c | 1 -
drivers/net/iavf/iavf_rxtx_vec_sse.c | 1 -
drivers/net/ice/ice_rxtx.c | 1 -
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 1 -
5 files changed, 10 insertions(+), 91 deletions(-)
diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index aa42b9b49f..d9cf4474fc 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -66,7 +66,6 @@ struct ci_tx_queue {
bool tx_deferred_start; /* don't start this queue in dev start */
bool q_set; /* indicate if tx queue has been configured */
bool vector_tx; /* port is using vector TX */
- bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
union { /* the VSI this queue belongs to */
struct i40e_vsi *i40e_vsi;
struct iavf_vsi *iavf_vsi;
@@ -120,72 +119,6 @@ ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts,
typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
-static __rte_always_inline int
-ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
-{
- struct ci_tx_entry *txep;
- uint32_t n;
- uint32_t i;
- int nb_free = 0;
- struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
-
- /* check DD bits on threshold descriptor */
- if (!desc_done(txq, txq->tx_next_dd))
- return 0;
-
- n = txq->tx_rs_thresh;
-
- /* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
- txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
- for (i = 0; i < n; i++) {
- free[i] = txep[i].mbuf;
- /* no need to reset txep[i].mbuf in vector path */
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
- goto done;
- }
-
- m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
- if (likely(m != NULL)) {
- free[0] = m;
- nb_free = 1;
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (likely(m != NULL)) {
- if (likely(m->pool == free[0]->pool)) {
- free[nb_free++] = m;
- } else {
- rte_mempool_put_bulk(free[0]->pool,
- (void *)free,
- nb_free);
- free[0] = m;
- nb_free = 1;
- }
- }
- }
- rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
- } else {
- for (i = 1; i < n; i++) {
- m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
- if (m != NULL)
- rte_mempool_put(m->pool, m);
- }
- }
-
-done:
- /* buffers were freed, update counters */
- txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
- txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
- if (txq->tx_next_dd >= txq->nb_tx_desc)
- txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
- return txq->tx_rs_thresh;
-}
-
static __rte_always_inline int
ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
{
@@ -278,21 +211,6 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
return txq->tx_rs_thresh;
}
-#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
- uint16_t i = start; \
- if (end < i) { \
- for (; i < nb_desc; i++) { \
- rte_pktmbuf_free_seg(swr[i].mbuf); \
- swr[i].mbuf = NULL; \
- } \
- i = 0; \
- } \
- for (; i < end; i++) { \
- rte_pktmbuf_free_seg(swr[i].mbuf); \
- swr[i].mbuf = NULL; \
- } \
-} while (0)
-
static inline void
ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
{
@@ -311,16 +229,21 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
/**
* vPMD tx will not set sw_ring's mbuf to NULL after free,
- * so need to free remains more carefully.
+ * so determining buffers to free is a little more complex.
*/
const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
const uint16_t end = txq->tx_tail >> use_ctx;
- if (txq->vector_sw_ring)
- IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
- else
- IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
+ uint16_t i = start;
+ if (end < i) {
+ for (; i < nb_desc; i++)
+ rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+ i = 0;
+ }
+ for (; i < end; i++)
+ rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+ memset(txq->sw_ring_vec, 0, sizeof(txq->sw_ring_vec[0]) * nb_desc);
}
#endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 745c467912..c3ff2e05c3 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,6 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
tx_queue_id);
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = txq->vector_tx;
/*
* tx_queue_id is queue id application refers to, while
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 9f7db80bfd..21d5bfd309 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1462,7 +1462,6 @@ int __rte_cold
iavf_txq_vec_setup(struct ci_tx_queue *txq)
{
txq->vector_tx = true;
- txq->vector_sw_ring = txq->vector_tx;
return 0;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 77cb6688a7..dcfa409813 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,6 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
/* record what kind of descriptor cleanup we need on teardown */
txq->vector_tx = ad->tx_vec_allowed;
- txq->vector_sw_ring = txq->vector_tx;
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 2b12bdcc9c..53d1fed6f8 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -182,7 +182,6 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
txq->sw_ring_vec = txq->sw_ring_vec + 1;
txq->ops = txq_ops;
txq->vector_tx = 1;
- txq->vector_sw_ring = 1;
return 0;
}
--
2.43.0
^ permalink raw reply [flat|nested] 127+ messages in thread
* [PATCH v4 22/24] net/ixgbe: use common Tx backlog entry fn
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (20 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 21/24] net/_common_intel: remove unneeded code Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 23/24] net/_common_intel: create common mbuf initializer fn Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 24/24] net/_common_intel: extract common Rx vector criteria Bruce Richardson
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin,
Wathsala Vithanage, Konstantin Ananyev
Remove the custom vector Tx backlog entry function and use the standard
intel_common one, now that all vector drivers are using the same,
smaller ring structure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 10 ----------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 4 ++--
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 4 ++--
3 files changed, 4 insertions(+), 14 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 53d1fed6f8..9c3752a12a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -68,16 +68,6 @@ ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
return txq->tx_rs_thresh;
}
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry_vec *txep,
- struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
- int i;
-
- for (i = 0; i < (int)nb_pkts; ++i)
- txep[i].mbuf = tx_pkts[i];
-}
-
static inline void
_ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 2ccb399b64..f879f6fa9a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -597,7 +597,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -614,7 +614,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring_vec[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index fa26365f06..915358e16b 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -720,7 +720,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
n = (uint16_t)(txq->nb_tx_desc - tx_id);
if (nb_commit >= n) {
- tx_backlog_entry(txep, tx_pkts, n);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, n);
for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
vtx1(txdp, *tx_pkts, flags);
@@ -737,7 +737,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
txep = &txq->sw_ring_vec[tx_id];
}
- tx_backlog_entry(txep, tx_pkts, nb_commit);
+ ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
vtx(txdp, tx_pkts, nb_commit, flags);
--
2.43.0
* [PATCH v4 23/24] net/_common_intel: create common mbuf initializer fn
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (21 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 22/24] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 24/24] net/_common_intel: extract common Rx vector criteria Bruce Richardson
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, David Christensen, Ian Stokes,
Wathsala Vithanage, Konstantin Ananyev, Vladimir Medvedkin,
Anatoly Burakov
Across a number of drivers, the same code is used for initializing the
"mbuf_initializer" value inside the rx queue structure for use with the
vector drivers. Since the rx queue structures are (currently) different
across the drivers, we cannot just move a single copy of the function to
a common location. Instead, we create a dedicated function which just
creates the mbuf initializer for a particular port.
In creating this function, we can shorten it vs the original versions by
initializing the mbuf fields as they are defined, rather than
afterwards. We can also remove the use of the barrier and temporary
uintptr_t variable, because the mbuf has been reworked so that
rearm_data is a proper single-element array in a union.
Across ixgbe, i40e, iavf and ice, we can call this function to
initialize the rxq data, replacing the "*_rxq_vec_setup_default"
functions. Only the i40e version was slightly different, having an
extra assignment to set the "sse" flag (even in the case of the neon
and altivec paths). This assignment was just duplicated to the calling sites
for simplicity and to keep existing behaviour.
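As background on why a single 64-bit value is enough (a sketch of the
consuming side, not code from this series): rearm_data overlays the
data_off, refcnt, nb_segs and port fields of the mbuf, so the Rx rearm
path can reset all four with one store of the precomputed value.
#include <stdint.h>
#include <rte_mbuf.h>
/* Illustrative only: rearming one mbuf from the precomputed value.
 * Indexing rearm_data[0] relies on the mbuf rework noted above, which
 * made rearm_data a single-element array in a union over those fields.
 */
static inline void
rearm_mbuf(struct rte_mbuf *mb, uint64_t mbuf_initializer)
{
	mb->rearm_data[0] = mbuf_initializer;
}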
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/rx.h | 13 +++++++++++++
drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +++-
drivers/net/i40e/i40e_rxtx_vec_common.h | 19 -------------------
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +++-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +++-
drivers/net/iavf/iavf_rxtx_vec_common.h | 18 ------------------
drivers/net/iavf/iavf_rxtx_vec_neon.c | 3 ++-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 3 ++-
drivers/net/ice/ice_rxtx_vec_common.h | 18 ------------------
drivers/net/ice/ice_rxtx_vec_sse.c | 3 ++-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 18 ------------------
drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 3 ++-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 3 ++-
13 files changed, 32 insertions(+), 81 deletions(-)
diff --git a/drivers/net/_common_intel/rx.h b/drivers/net/_common_intel/rx.h
index 5bd2fea7e3..ca0485875c 100644
--- a/drivers/net/_common_intel/rx.h
+++ b/drivers/net/_common_intel/rx.h
@@ -76,4 +76,17 @@ ci_rx_reassemble_packets(struct rte_mbuf **rx_bufs, uint16_t nb_bufs, uint8_t *s
return pkt_idx;
}
+static inline uint64_t
+ci_rxq_mbuf_initializer(uint16_t port_id)
+{
+ struct rte_mbuf mb_def = {
+ .nb_segs = 1,
+ .data_off = RTE_PKTMBUF_HEADROOM,
+ .port = port_id,
+ };
+ rte_mbuf_refcnt_set(&mb_def, 1);
+
+ return mb_def.rearm_data[0];
+}
+
#endif /* _COMMON_INTEL_RX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index b6900a3e15..e8046b5ce5 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -621,7 +621,9 @@ i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
int __rte_cold
i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
{
- return i40e_rxq_vec_setup_default(rxq);
+ rxq->rx_using_sse = 1;
+ rxq->mbuf_initializer = ci_rxq_mbuf_initializer(rxq->port_id);
+ return 0;
}
int __rte_cold
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 733dc797cd..1ccdbd3fdb 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -54,25 +54,6 @@ _i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline int
-i40e_rxq_vec_setup_default(struct i40e_rx_queue *rxq)
-{
- uintptr_t p;
- struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
-
- mb_def.nb_segs = 1;
- mb_def.data_off = RTE_PKTMBUF_HEADROOM;
- mb_def.port = rxq->port_id;
- rte_mbuf_refcnt_set(&mb_def, 1);
-
- /* prevent compiler reordering: rearm_data covers previous fields */
- rte_compiler_barrier();
- p = (uintptr_t)&mb_def.rearm_data;
- rxq->mbuf_initializer = *(uint64_t *)p;
- rxq->rx_using_sse = 1;
- return 0;
-}
-
static inline int
i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
{
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index b398d66154..1c7e9bf1fa 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -749,7 +749,9 @@ i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
int __rte_cold
i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
{
- return i40e_rxq_vec_setup_default(rxq);
+ rxq->rx_using_sse = 1;
+ rxq->mbuf_initializer = ci_rxq_mbuf_initializer(rxq->port_id);
+ return 0;
}
int __rte_cold
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 90c57e59d0..42255a20af 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -767,7 +767,9 @@ i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
int __rte_cold
i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
{
- return i40e_rxq_vec_setup_default(rxq);
+ rxq->rx_using_sse = 1;
+ rxq->mbuf_initializer = ci_rxq_mbuf_initializer(rxq->port_id);
+ return 0;
}
int __rte_cold
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index c69399a173..2cea4b0fb9 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -54,24 +54,6 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline int
-iavf_rxq_vec_setup_default(struct iavf_rx_queue *rxq)
-{
- uintptr_t p;
- struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
-
- mb_def.nb_segs = 1;
- mb_def.data_off = RTE_PKTMBUF_HEADROOM;
- mb_def.port = rxq->port_id;
- rte_mbuf_refcnt_set(&mb_def, 1);
-
- /* prevent compiler reordering: rearm_data covers previous fields */
- rte_compiler_barrier();
- p = (uintptr_t)&mb_def.rearm_data;
- rxq->mbuf_initializer = *(uint64_t *)p;
- return 0;
-}
-
static inline int
iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
{
diff --git a/drivers/net/iavf/iavf_rxtx_vec_neon.c b/drivers/net/iavf/iavf_rxtx_vec_neon.c
index 04be574683..56685ac02e 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_neon.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_neon.c
@@ -407,7 +407,8 @@ int __rte_cold
iavf_rxq_vec_setup(struct iavf_rx_queue *rxq)
{
rxq->ops = &neon_vec_rxq_ops;
- return iavf_rxq_vec_setup_default(rxq);
+ rxq->mbuf_initializer = ci_rxq_mbuf_initializer(rxq->port_id);
+ return 0;
}
int __rte_cold
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 21d5bfd309..210ec9e690 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1469,7 +1469,8 @@ int __rte_cold
iavf_rxq_vec_setup(struct iavf_rx_queue *rxq)
{
rxq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
- return iavf_rxq_vec_setup_default(rxq);
+ rxq->mbuf_initializer = ci_rxq_mbuf_initializer(rxq->port_id);
+ return 0;
}
int __rte_cold
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index aa709fb51c..d5cf0e6fca 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -55,24 +55,6 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}
-static inline int
-ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
-{
- uintptr_t p;
- struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
-
- mb_def.nb_segs = 1;
- mb_def.data_off = RTE_PKTMBUF_HEADROOM;
- mb_def.port = rxq->port_id;
- rte_mbuf_refcnt_set(&mb_def, 1);
-
- /* prevent compiler reordering: rearm_data covers previous fields */
- rte_compiler_barrier();
- p = (uintptr_t)&mb_def.rearm_data;
- rxq->mbuf_initializer = *(uint64_t *)p;
- return 0;
-}
-
#define ICE_TX_NO_VECTOR_FLAGS ( \
RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 73e3e9eb54..d723017c2c 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -789,7 +789,8 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq)
return -1;
rxq->rx_rel_mbufs = _ice_rx_queue_release_mbufs_vec;
- return ice_rxq_vec_setup_default(rxq);
+ rxq->mbuf_initializer = ci_rxq_mbuf_initializer(rxq->port_id);
+ return 0;
}
int __rte_cold
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 9c3752a12a..4a4d793e20 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -143,24 +143,6 @@ _ixgbe_reset_tx_queue_vec(struct ci_tx_queue *txq)
memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
}
-static inline int
-ixgbe_rxq_vec_setup_default(struct ixgbe_rx_queue *rxq)
-{
- uintptr_t p;
- struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
-
- mb_def.nb_segs = 1;
- mb_def.data_off = RTE_PKTMBUF_HEADROOM;
- mb_def.port = rxq->port_id;
- rte_mbuf_refcnt_set(&mb_def, 1);
-
- /* prevent compiler reordering: rearm_data covers previous fields */
- rte_compiler_barrier();
- p = (uintptr_t)&mb_def.rearm_data;
- rxq->mbuf_initializer = *(uint64_t *)p;
- return 0;
-}
-
static inline int
ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
const struct ixgbe_txq_ops *txq_ops)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index f879f6fa9a..e832f66e42 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -659,7 +659,8 @@ static const struct ixgbe_txq_ops vec_txq_ops = {
int __rte_cold
ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
{
- return ixgbe_rxq_vec_setup_default(rxq);
+ rxq->mbuf_initializer = ci_rxq_mbuf_initializer(rxq->port_id);
+ return 0;
}
int __rte_cold
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 915358e16b..f384b4b0e4 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -782,7 +782,8 @@ static const struct ixgbe_txq_ops vec_txq_ops = {
int __rte_cold
ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq)
{
- return ixgbe_rxq_vec_setup_default(rxq);
+ rxq->mbuf_initializer = ci_rxq_mbuf_initializer(rxq->port_id);
+ return 0;
}
int __rte_cold
--
2.43.0
* [PATCH v4 24/24] net/_common_intel: extract common Rx vector criteria
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
` (22 preceding siblings ...)
2024-12-20 14:39 ` [PATCH v4 23/24] net/_common_intel: create common mbuf initializer fn Bruce Richardson
@ 2024-12-20 14:39 ` Bruce Richardson
23 siblings, 0 replies; 127+ messages in thread
From: Bruce Richardson @ 2024-12-20 14:39 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
While some drivers have specific criteria for when a vector driver can
be enabled on the Rx path, there are a number of basic criteria which
apply across all drivers. Centralize those in the _common_intel folder,
and then update the drivers to use the common conditional checks. This
adds additional restrictions to drivers such as ixgbe, where these
conditions were necessary but never checked.
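As a usage sketch of the new helper (the values here are purely
illustrative, not taken from any driver):

	/* 1024 descs, free threshold 32, checksum offload only: ok */
	ci_rxq_vec_capable(1024, 32, RTE_ETH_RX_OFFLOAD_CHECKSUM); /* true */
	/* 1000 is not a power of two */
	ci_rxq_vec_capable(1000, 32, 0); /* false */
	/* free threshold below CI_RX_BURST (32) */
	ci_rxq_vec_capable(1024, 16, 0); /* false */
	/* timestamp offload is unsupported on the vector path */
	ci_rxq_vec_capable(1024, 32, RTE_ETH_RX_OFFLOAD_TIMESTAMP); /* false */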
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/_common_intel/rx.h | 20 +++++++++++++
drivers/net/i40e/i40e_rxtx_vec_common.h | 35 +++++------------------
drivers/net/iavf/iavf_rxtx.c | 15 +---------
drivers/net/iavf/iavf_rxtx.h | 1 +
drivers/net/ice/ice_rxtx_vec_common.h | 14 +--------
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 7 +++++
6 files changed, 37 insertions(+), 55 deletions(-)
diff --git a/drivers/net/_common_intel/rx.h b/drivers/net/_common_intel/rx.h
index ca0485875c..abb01ba5e7 100644
--- a/drivers/net/_common_intel/rx.h
+++ b/drivers/net/_common_intel/rx.h
@@ -8,6 +8,7 @@
#include <stdint.h>
#include <unistd.h>
#include <rte_mbuf.h>
+#include <rte_ethdev.h>
#define CI_RX_BURST 32
@@ -89,4 +90,23 @@ ci_rxq_mbuf_initializer(uint16_t port_id)
return mb_def.rearm_data[0];
}
+/* basic checks for a vector-driver capable queue.
+ * Individual drivers may have further tests beyond this.
+ */
+static inline bool
+ci_rxq_vec_capable(uint16_t nb_desc, uint16_t rx_free_thresh, uint64_t offloads)
+{
+ if (!rte_is_power_of_2(nb_desc) ||
+ rx_free_thresh < CI_RX_BURST ||
+ (nb_desc % rx_free_thresh) != 0)
+ return false;
+
+ /* no driver supports timestamping or buffer split on vector path */
+ if ((offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
+ (offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT))
+ return false;
+
+ return true;
+}
+
#endif /* _COMMON_INTEL_RX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 1ccdbd3fdb..5d0b777e0d 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -61,9 +61,6 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- struct i40e_rx_queue *rxq;
- uint16_t desc, i;
- bool first_queue;
/* no QinQ support */
if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
@@ -73,31 +70,13 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
* Vector mode is allowed only when number of Rx queue
* descriptor is power of 2.
*/
- if (!dev->data->dev_started) {
- first_queue = true;
- for (i = 0; i < dev->data->nb_rx_queues; i++) {
- rxq = dev->data->rx_queues[i];
- if (!rxq)
- continue;
- desc = rxq->nb_rx_desc;
- if (first_queue)
- ad->rx_vec_allowed =
- rte_is_power_of_2(desc);
- else
- ad->rx_vec_allowed =
- ad->rx_vec_allowed ?
- rte_is_power_of_2(desc) :
- ad->rx_vec_allowed;
- first_queue = false;
- }
- } else {
- /* Only check the first queue's descriptor number */
- for (i = 0; i < dev->data->nb_rx_queues; i++) {
- rxq = dev->data->rx_queues[i];
- if (!rxq)
- continue;
- desc = rxq->nb_rx_desc;
- ad->rx_vec_allowed = rte_is_power_of_2(desc);
+ ad->rx_vec_allowed = true;
+ for (uint16_t i = 0; i < dev->data->nb_rx_queues; i++) {
+ struct i40e_rx_queue *rxq = dev->data->rx_queues[i];
+ if (!rxq)
+ continue;
+ if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh, rxq->offloads)) {
+ ad->rx_vec_allowed = false;
break;
}
}
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6692f6992b..e4c4b9682c 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -199,19 +199,6 @@ check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
return 0;
}
-static inline bool
-check_rx_vec_allow(struct iavf_rx_queue *rxq)
-{
- if (rxq->rx_free_thresh >= IAVF_VPMD_RX_MAX_BURST &&
- rxq->nb_rx_desc % rxq->rx_free_thresh == 0) {
- PMD_INIT_LOG(DEBUG, "Vector Rx can be enabled on this rxq.");
- return true;
- }
-
- PMD_INIT_LOG(DEBUG, "Vector Rx cannot be enabled on this rxq.");
- return false;
-}
-
static inline bool
check_tx_vec_allow(struct ci_tx_queue *txq)
{
@@ -722,7 +709,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
ad->rx_bulk_alloc_allowed = false;
}
- if (check_rx_vec_allow(rxq) == false)
+ if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh, rxq->offloads))
ad->rx_vec_allowed = false;
#if defined RTE_ARCH_X86 || defined RTE_ARCH_ARM
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index c18e01560c..774c5c3574 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -5,6 +5,7 @@
#ifndef _IAVF_RXTX_H_
#define _IAVF_RXTX_H_
+#include <_common_intel/rx.h>
#include <_common_intel/tx.h>
/* In QLEN must be whole number of 32 descriptors. */
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index d5cf0e6fca..331741e6b0 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -88,24 +88,12 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
if (!rxq)
return -1;
- if (!rte_is_power_of_2(rxq->nb_rx_desc))
- return -1;
-
- if (rxq->rx_free_thresh < ICE_VPMD_RX_BURST)
- return -1;
-
- if (rxq->nb_rx_desc % rxq->rx_free_thresh)
+ if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh, rxq->offloads))
return -1;
if (rxq->proto_xtr != PROTO_XTR_NONE)
return -1;
- if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
- return -1;
-
- if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)
- return -1;
-
if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD)
return ICE_VECTOR_OFFLOAD_PATH;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 4a4d793e20..0703d5eecf 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -168,6 +168,13 @@ ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
if (fconf->mode != RTE_FDIR_MODE_NONE)
return -1;
+ for (uint16_t i = 0; i < dev->data->nb_rx_queues; i++) {
+ struct ixgbe_rx_queue *rxq = dev->data->rx_queues[i];
+ if (!rxq)
+ continue;
+ if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh, rxq->offloads))
+ return -1;
+ }
return 0;
#else
RTE_SET_USED(dev);
--
2.43.0
* Re: [PATCH v4 01/24] net/_common_intel: add pkt reassembly fn for intel drivers
2024-12-20 14:38 ` [PATCH v4 01/24] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
@ 2024-12-20 16:15 ` Stephen Hemminger
0 siblings, 0 replies; 127+ messages in thread
From: Stephen Hemminger @ 2024-12-20 16:15 UTC (permalink / raw)
To: Bruce Richardson
Cc: dev, David Christensen, Ian Stokes, Konstantin Ananyev,
Wathsala Vithanage, Vladimir Medvedkin, Anatoly Burakov
On Fri, 20 Dec 2024 14:38:58 +0000
Bruce Richardson <bruce.richardson@intel.com> wrote:
> +
> + if (!split_flags[buf_idx]) {
> + /* it's the last packet of the set */
> + start->hash = end->hash;
> + start->vlan_tci = end->vlan_tci;
> + start->ol_flags = end->ol_flags;
> + /* we need to strip crc for the whole packet */
> + start->pkt_len -= crc_len;
> + if (end->data_len > crc_len) {
> + end->data_len -= crc_len;
> + } else {
> + /* free up last mbuf */
> + struct rte_mbuf *secondlast = start;
> +
> + start->nb_segs--;
> + while (secondlast->next != end)
> + secondlast = secondlast->next;
> + secondlast->data_len -= (crc_len - end->data_len);
> + secondlast->next = NULL;
> + rte_pktmbuf_free_seg(end);
> + }
The problem with freeing the last buffer is that the CRC will be garbage.
What if the CRC is sitting past the last mbuf?
+-----------------------+    +-----+
|         Data          +--->+ CRC |
+-----------------------+    +-----+
This part (from the original code) will free the second mbuf, which
contains the CRC. The whole "don't strip the CRC, leave it past the
mbuf data" model of mbufs is a dangerous trap.
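Concretely (numbers for illustration only): with crc_len = 4 and a
last segment holding data_len = 2, two CRC bytes sit at the end of the
second-last segment and two in the last; the code above trims two
bytes from secondlast and frees end, so anything that later expects to
find the CRC past the packet data would be reading freed memory.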
Thread overview: 127+ messages
2024-11-22 12:53 [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 01/21] common/intel_eth: add pkt reassembly fn for intel drivers Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 02/21] common/intel_eth: provide common Tx entry structures Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 03/21] common/intel_eth: add Tx mbuf ring replenish fn Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 04/21] drivers/net: align Tx queue struct field names Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 05/21] drivers/net: add prefix for driver-specific structs Bruce Richardson
2024-11-22 12:53 ` [RFC PATCH 06/21] common/intel_eth: merge ice and i40e Tx queue struct Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 07/21] net/iavf: use common Tx queue structure Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 08/21] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 09/21] net/ixgbe: use common Tx queue structure Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 10/21] common/intel_eth: pack " Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 11/21] common/intel_eth: add post-Tx buffer free function Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 12/21] common/intel_eth: add Tx buffer free fn for AVX-512 Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 13/21] net/iavf: use common Tx " Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 14/21] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 15/21] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 16/21] net/ixgbe: " Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 17/21] net/iavf: " Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 18/21] net/ice: use vector SW ring for all vector paths Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 19/21] net/i40e: " Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 20/21] net/iavf: " Bruce Richardson
2024-11-22 12:54 ` [RFC PATCH 21/21] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
2024-11-25 16:25 ` [RFC PATCH 00/21] Reduce code duplication across Intel NIC drivers David Marchand
2024-11-25 16:31 ` Bruce Richardson
2024-11-26 14:57 ` Thomas Monjalon
2024-11-26 15:27 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 " Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 01/21] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 02/21] net/_common_intel: provide common Tx entry structures Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 03/21] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 04/21] drivers/net: align Tx queue struct field names Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 05/21] drivers/net: add prefix for driver-specific structs Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 06/21] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 07/21] net/iavf: use common Tx queue structure Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 08/21] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 09/21] net/ixgbe: use common Tx queue structure Bruce Richardson
2024-12-02 13:51 ` Medvedkin, Vladimir
2024-12-02 14:09 ` Bruce Richardson
2024-12-02 15:15 ` Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 10/21] net/_common_intel: pack " Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 11/21] net/_common_intel: add post-Tx buffer free function Bruce Richardson
2024-12-02 12:59 ` David Marchand
2024-12-02 13:12 ` Bruce Richardson
2024-12-02 13:24 ` Bruce Richardson
2024-12-02 13:55 ` David Marchand
2024-12-02 11:24 ` [PATCH v1 12/21] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 13/21] net/iavf: use common Tx " Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 14/21] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 15/21] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 16/21] net/ixgbe: " Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 17/21] net/iavf: " Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 18/21] net/ice: use vector SW ring for all vector paths Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 19/21] net/i40e: " Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 20/21] net/iavf: " Bruce Richardson
2024-12-02 11:24 ` [PATCH v1 21/21] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 01/22] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 02/22] net/_common_intel: provide common Tx entry structures Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 03/22] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 04/22] drivers/net: align Tx queue struct field names Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 05/22] drivers/net: add prefix for driver-specific structs Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 06/22] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 07/22] net/iavf: use common Tx queue structure Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 08/22] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 09/22] net/ixgbe: use common Tx queue structure Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 10/22] net/_common_intel: pack " Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 11/22] net/_common_intel: add post-Tx buffer free function Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 12/22] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 13/22] net/iavf: use common Tx " Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 14/22] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 15/22] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 16/22] net/ixgbe: " Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 17/22] net/iavf: " Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 18/22] net/ice: use vector SW ring for all vector paths Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 19/22] net/i40e: " Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 20/22] net/iavf: " Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 21/22] net/_common_intel: remove unneeded code Bruce Richardson
2024-12-03 16:41 ` [PATCH v2 22/22] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 00/22] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 01/22] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 02/22] net/_common_intel: provide common Tx entry structures Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 03/22] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 04/22] drivers/net: align Tx queue struct field names Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 05/22] drivers/net: add prefix for driver-specific structs Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 06/22] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 07/22] net/iavf: use common Tx queue structure Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 08/22] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 09/22] net/ixgbe: use common Tx queue structure Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 10/22] net/_common_intel: pack " Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 11/22] net/_common_intel: add post-Tx buffer free function Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 12/22] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 13/22] net/iavf: use common Tx " Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 14/22] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 15/22] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 16/22] net/ixgbe: " Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 17/22] net/iavf: " Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 18/22] net/ice: use vector SW ring for all vector paths Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 19/22] net/i40e: " Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 20/22] net/iavf: " Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 21/22] net/_common_intel: remove unneeded code Bruce Richardson
2024-12-11 17:33 ` [PATCH v3 22/22] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
2024-12-20 14:38 ` [PATCH v4 00/24] Reduce code duplication across Intel NIC drivers Bruce Richardson
2024-12-20 14:38 ` [PATCH v4 01/24] net/_common_intel: add pkt reassembly fn for intel drivers Bruce Richardson
2024-12-20 16:15 ` Stephen Hemminger
2024-12-20 14:38 ` [PATCH v4 02/24] net/_common_intel: provide common Tx entry structures Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 03/24] net/_common_intel: add Tx mbuf ring replenish fn Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 04/24] drivers/net: align Tx queue struct field names Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 05/24] drivers/net: add prefix for driver-specific structs Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 06/24] net/_common_intel: merge ice and i40e Tx queue struct Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 07/24] net/iavf: use common Tx queue structure Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 08/24] net/ixgbe: convert Tx queue context cache field to ptr Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 09/24] net/ixgbe: use common Tx queue structure Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 10/24] net/_common_intel: pack " Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 11/24] net/_common_intel: add post-Tx buffer free function Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 12/24] net/_common_intel: add Tx buffer free fn for AVX-512 Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 13/24] net/iavf: use common Tx " Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 14/24] net/ice: move Tx queue mbuf cleanup fn to common Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 15/24] net/i40e: use common Tx queue mbuf cleanup fn Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 16/24] net/ixgbe: " Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 17/24] net/iavf: " Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 18/24] net/ice: use vector SW ring for all vector paths Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 19/24] net/i40e: " Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 20/24] net/iavf: " Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 21/24] net/_common_intel: remove unneeded code Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 22/24] net/ixgbe: use common Tx backlog entry fn Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 23/24] net/_common_intel: create common mbuf initializer fn Bruce Richardson
2024-12-20 14:39 ` [PATCH v4 24/24] net/_common_intel: extract common Rx vector criteria Bruce Richardson