DPDK patches and discussions
* [PATCH 0/6] net/intel: fixes and improvements to rx path selection
@ 2025-10-14  8:45 Ciara Loftus
  2025-10-14  8:45 ` [PATCH 1/6] net/intel: fix Rx vector capability detection Ciara Loftus
                   ` (6 more replies)
  0 siblings, 7 replies; 22+ messages in thread
From: Ciara Loftus @ 2025-10-14  8:45 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus

This series contains a number of fixes and improvements to
the logic for selecting an Rx path in the Intel drivers.
The first three patches are fixes that affect the iavf
driver. The final three patches reformat the arrays in the
i40e, iavf and ice drivers that hold the per-path
information used in the common selection process, in order
to improve readability.

Ciara Loftus (6):
  net/intel: fix Rx vector capability detection
  net/iavf: fix Rx paths feature definitions
  net/iavf: fix Rx path selection for scalar flex bulk alloc
  net/iavf: reformat the Rx path infos array
  net/i40e: reformat the Rx path infos array
  net/ice: reformat the Rx path infos array

 drivers/net/intel/common/rx.h      |   5 +-
 drivers/net/intel/i40e/i40e_rxtx.c | 126 +++++++++----
 drivers/net/intel/iavf/iavf.h      |   1 +
 drivers/net/intel/iavf/iavf_rxtx.c | 285 +++++++++++++++++++++--------
 drivers/net/intel/iavf/iavf_rxtx.h |   1 -
 drivers/net/intel/ice/ice_rxtx.c   | 124 +++++++++----
 6 files changed, 401 insertions(+), 141 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 1/6] net/intel: fix Rx vector capability detection
  2025-10-14  8:45 [PATCH 0/6] net/intel: fixes and improvements to rx path selection Ciara Loftus
@ 2025-10-14  8:45 ` Ciara Loftus
  2025-10-14 14:16   ` Bruce Richardson
  2025-10-14  8:45 ` [PATCH 2/6] net/iavf: fix Rx paths feature definitions Ciara Loftus
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Ciara Loftus @ 2025-10-14  8:45 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus, stable

The common function for detecting whether an Rx queue could use a
vector Rx path would automatically disqualify Rx queues that had the
timestamp offload enabled. This was incorrect behaviour because the
iavf driver, which uses this common function, supports timestamp
offload on its vector paths. Fix this by removing the conditional
check for timestamp offload.
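
For illustration, the predicate after this fix reduces to the sketch
below. The flag values and the descriptor-count check are simplified
stand-ins, not the real `rte_ethdev.h` constants or the full logic in
`drivers/net/intel/common/rx.h`:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in flag values; the real RTE_ETH_RX_OFFLOAD_*
 * constants are defined in rte_ethdev.h. */
#define RX_OFFLOAD_TIMESTAMP    (1ULL << 0)
#define RX_OFFLOAD_BUFFER_SPLIT (1ULL << 1)

/* After the fix: timestamp offload no longer disqualifies a queue
 * from the vector path; only buffer split does. */
static bool
rxq_vec_capable(uint16_t nb_desc, uint16_t rx_free_thresh, uint64_t offloads)
{
	/* simplified version of the descriptor-count checks */
	if (rx_free_thresh == 0 || (nb_desc % rx_free_thresh) != 0)
		return false;

	/* no driver supports buffer split on the vector path */
	if (offloads & RX_OFFLOAD_BUFFER_SPLIT)
		return false;

	return true;
}
```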

Fixes: 9eb60580d155 ("net/intel: extract common Rx vector criteria")
Cc: stable@dpdk.org

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/intel/common/rx.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/intel/common/rx.h b/drivers/net/intel/common/rx.h
index 741808f573..d3e4492ff1 100644
--- a/drivers/net/intel/common/rx.h
+++ b/drivers/net/intel/common/rx.h
@@ -235,9 +235,8 @@ ci_rxq_vec_capable(uint16_t nb_desc, uint16_t rx_free_thresh, uint64_t offloads)
 			(nb_desc % rx_free_thresh) != 0)
 		return false;
 
-	/* no driver supports timestamping or buffer split on vector path */
-	if ((offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
-			(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT))
+	/* no driver supports buffer split on vector path */
+	if (offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)
 		return false;
 
 	return true;
-- 
2.34.1



* [PATCH 2/6] net/iavf: fix Rx paths feature definitions
  2025-10-14  8:45 [PATCH 0/6] net/intel: fixes and improvements to rx path selection Ciara Loftus
  2025-10-14  8:45 ` [PATCH 1/6] net/intel: fix Rx vector capability detection Ciara Loftus
@ 2025-10-14  8:45 ` Ciara Loftus
  2025-10-14 14:26   ` Bruce Richardson
  2025-10-14  8:45 ` [PATCH 3/6] net/iavf: fix Rx path selection for scalar flex bulk alloc Ciara Loftus
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Ciara Loftus @ 2025-10-14  8:45 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus, stable

Two Rx paths had incorrect feature and offload definitions,
which led to incorrect path selection. Fix these.

Remove timestamp offload from the list of offloads
supported by paths that use the flexible Rx descriptor. It
is only available in the "offload" versions of those paths.

Fixes: 91e3205d72d8 ("net/iavf: use common Rx path selection infrastructure")
Cc: stable@dpdk.org

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/intel/iavf/iavf_rxtx.c | 5 +++--
 drivers/net/intel/iavf/iavf_rxtx.h | 1 -
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c
index 775fb4a66f..67c73f9ad6 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.c
+++ b/drivers/net/intel/iavf/iavf_rxtx.c
@@ -3768,13 +3768,14 @@ static const struct ci_rx_path_info iavf_rx_path_infos[] = {
 			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
 	[IAVF_RX_AVX2_FLEX_RXD_OFFLOAD] = {
 		iavf_recv_pkts_vec_avx2_flex_rxd_offload, "Vector AVX2 Flex Offload",
-			{IAVF_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256,
+			{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS, RTE_VECT_SIMD_256,
 				{.flex_desc = true, .bulk_alloc = true}}},
 	[IAVF_RX_AVX2_SCATTERED_FLEX_RXD_OFFLOAD] = {
 		iavf_recv_scattered_pkts_vec_avx2_flex_rxd_offload,
 		"Vector Scattered AVX2 Flex Offload",
 		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
-			RTE_VECT_SIMD_256, {.flex_desc = true, .bulk_alloc = true}}},
+			RTE_VECT_SIMD_256,
+			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
 #ifdef CC_AVX512_SUPPORT
 	[IAVF_RX_AVX512] = {iavf_recv_pkts_vec_avx512, "Vector AVX512",
 		{IAVF_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h
index 3f461efb28..44be29caf6 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.h
+++ b/drivers/net/intel/iavf/iavf_rxtx.h
@@ -83,7 +83,6 @@
 /* vector paths that use the flex rx desc */
 #define IAVF_RX_VECTOR_FLEX_OFFLOADS (			\
 		IAVF_RX_VECTOR_OFFLOADS |		\
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP |		\
 		RTE_ETH_RX_OFFLOAD_SECURITY)
 /* vector offload paths */
 #define IAVF_RX_VECTOR_OFFLOAD_OFFLOADS (		\
-- 
2.34.1



* [PATCH 3/6] net/iavf: fix Rx path selection for scalar flex bulk alloc
  2025-10-14  8:45 [PATCH 0/6] net/intel: fixes and improvements to rx path selection Ciara Loftus
  2025-10-14  8:45 ` [PATCH 1/6] net/intel: fix Rx vector capability detection Ciara Loftus
  2025-10-14  8:45 ` [PATCH 2/6] net/iavf: fix Rx paths feature definitions Ciara Loftus
@ 2025-10-14  8:45 ` Ciara Loftus
  2025-10-14 14:33   ` Bruce Richardson
  2025-10-14  8:45 ` [PATCH 4/6] net/iavf: reformat the Rx path infos array Ciara Loftus
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Ciara Loftus @ 2025-10-14  8:45 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus, stable

The scalar bulk alloc Rx burst function supports both the legacy and
flexible Rx descriptor formats. The Rx path selection infrastructure
introduced in commit 91e3205d72d8 ("net/iavf: use common Rx path
selection infrastructure") cannot define a path that supports both
descriptor formats. To solve this, add two Rx path definitions which
both point to the same Rx burst function but report different
descriptor formats. This allows the Rx path selection function to
choose the correct path.
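
The shape of the fix can be sketched generically as below. The types,
names and selection loop are illustrative simplifications, not the
actual iavf definitions; the point is simply that two table entries
may share one burst function while advertising different descriptor
formats:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the driver's path-info types. */
struct rx_features { bool flex_desc; bool bulk_alloc; };
struct rx_path_info {
	int (*pkt_burst)(void);  /* placeholder burst-fn signature */
	const char *info;
	struct rx_features features;
};

/* One burst function handling both descriptor formats. */
static int recv_pkts_bulk_alloc(void) { return 0; }

enum { RX_BULK_ALLOC, RX_BULK_ALLOC_FLEX_RXD, RX_NUM_PATHS };

/* Two entries, one function: the selector can match on flex_desc
 * even though both paths run the same code. */
static const struct rx_path_info rx_path_infos[RX_NUM_PATHS] = {
	[RX_BULK_ALLOC] = {recv_pkts_bulk_alloc, "Scalar Bulk Alloc",
		{.flex_desc = false, .bulk_alloc = true}},
	[RX_BULK_ALLOC_FLEX_RXD] = {recv_pkts_bulk_alloc, "Scalar Bulk Alloc Flex",
		{.flex_desc = true, .bulk_alloc = true}},
};

/* Minimal selection: first path whose descriptor format matches. */
static const struct rx_path_info *
select_rx_path(bool want_flex)
{
	for (size_t i = 0; i < RX_NUM_PATHS; i++)
		if (rx_path_infos[i].features.flex_desc == want_flex)
			return &rx_path_infos[i];
	return NULL;
}
```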

Fixes: 91e3205d72d8 ("net/iavf: use common Rx path selection infrastructure")
Cc: stable@dpdk.org

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/intel/iavf/iavf.h      | 1 +
 drivers/net/intel/iavf/iavf_rxtx.c | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 435902fbc2..4e76162337 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -327,6 +327,7 @@ enum iavf_rx_func_type {
 	IAVF_RX_FLEX_RXD,
 	IAVF_RX_SCATTERED_FLEX_RXD,
 	IAVF_RX_BULK_ALLOC,
+	IAVF_RX_BULK_ALLOC_FLEX_RXD,
 	IAVF_RX_SSE,
 	IAVF_RX_SSE_SCATTERED,
 	IAVF_RX_SSE_FLEX_RXD,
diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c
index 67c73f9ad6..bbf3a1737e 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.c
+++ b/drivers/net/intel/iavf/iavf_rxtx.c
@@ -3734,6 +3734,9 @@ static const struct ci_rx_path_info iavf_rx_path_infos[] = {
 				{.scattered = true, .flex_desc = true}}},
 	[IAVF_RX_BULK_ALLOC] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc",
 		{IAVF_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.bulk_alloc = true}}},
+	[IAVF_RX_BULK_ALLOC_FLEX_RXD] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc Flex",
+			{IAVF_RX_SCALAR_FLEX_OFFLOADS, RTE_VECT_SIMD_DISABLED,
+			{.flex_desc = true, .bulk_alloc = true}}},
 #ifdef RTE_ARCH_X86
 	[IAVF_RX_SSE] = {iavf_recv_pkts_vec, "Vector SSE",
 		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-- 
2.34.1



* [PATCH 4/6] net/iavf: reformat the Rx path infos array
  2025-10-14  8:45 [PATCH 0/6] net/intel: fixes and improvements to rx path selection Ciara Loftus
                   ` (2 preceding siblings ...)
  2025-10-14  8:45 ` [PATCH 3/6] net/iavf: fix Rx path selection for scalar flex bulk alloc Ciara Loftus
@ 2025-10-14  8:45 ` Ciara Loftus
  2025-10-14 14:38   ` Bruce Richardson
  2025-10-14  8:45 ` [PATCH 5/6] net/i40e: " Ciara Loftus
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Ciara Loftus @ 2025-10-14  8:45 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus

In order to improve readability, reformat the Rx path infos array.
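
The reformat replaces positional initializers with C99 designated
initializers. A generic before/after, using illustrative types rather
than the driver's actual ones:

```c
struct features { unsigned long rx_offloads; int simd_width; };
struct path_info { const char *name; struct features features; };

/* Positional form: readers must remember the field order. */
static const struct path_info positional = {"Scalar", {0x7UL, 0}};

/* Designated form used by this patch: each field is named at the
 * point of initialization, so entries stay readable as the feature
 * struct grows. */
static const struct path_info designated = {
	.name = "Scalar",
	.features = {
		.rx_offloads = 0x7UL,
		.simd_width = 0,
	},
};
```

Both forms produce identical objects; only the source readability
differs.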

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/intel/iavf/iavf_rxtx.c | 289 +++++++++++++++++++++--------
 1 file changed, 210 insertions(+), 79 deletions(-)

diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c
index bbf3a1737e..58d5747c40 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.c
+++ b/drivers/net/intel/iavf/iavf_rxtx.c
@@ -3720,99 +3720,230 @@ iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
 				uint16_t nb_pkts);
 
 static const struct ci_rx_path_info iavf_rx_path_infos[] = {
-	[IAVF_RX_DISABLED] = {iavf_recv_pkts_no_poll, "Disabled",
-		{IAVF_RX_NO_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.disabled = true}}},
-	[IAVF_RX_DEFAULT] = {iavf_recv_pkts, "Scalar",
-		{IAVF_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED}},
-	[IAVF_RX_SCATTERED] = {iavf_recv_scattered_pkts, "Scalar Scattered",
-		{IAVF_RX_SCALAR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_DISABLED,
-			{.scattered = true}}},
-	[IAVF_RX_FLEX_RXD] = {iavf_recv_pkts_flex_rxd, "Scalar Flex",
-		{IAVF_RX_SCALAR_FLEX_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.flex_desc = true}}},
-	[IAVF_RX_SCATTERED_FLEX_RXD] = {iavf_recv_scattered_pkts_flex_rxd, "Scalar Scattered Flex",
-		{IAVF_RX_SCALAR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_DISABLED,
-				{.scattered = true, .flex_desc = true}}},
-	[IAVF_RX_BULK_ALLOC] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc",
-		{IAVF_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.bulk_alloc = true}}},
-	[IAVF_RX_BULK_ALLOC_FLEX_RXD] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc Flex",
-			{IAVF_RX_SCALAR_FLEX_OFFLOADS, RTE_VECT_SIMD_DISABLED,
-			{.flex_desc = true, .bulk_alloc = true}}},
+	[IAVF_RX_DISABLED] = {
+		.pkt_burst = iavf_recv_pkts_no_poll,
+		.info = "Disabled",
+		.features = {
+			.rx_offloads = IAVF_RX_NO_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_DISABLED,
+			.extra.disabled = true}},
+	[IAVF_RX_DEFAULT] = {
+		.pkt_burst = iavf_recv_pkts,
+		.info = "Scalar",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_DISABLED}},
+	[IAVF_RX_SCATTERED] = {
+		.pkt_burst = iavf_recv_scattered_pkts,
+		.info = "Scalar Scattered",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_DISABLED,
+			.extra.scattered = true}},
+	[IAVF_RX_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_pkts_flex_rxd,
+		.info = "Scalar Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_DISABLED,
+			.extra.flex_desc = true}},
+	[IAVF_RX_SCATTERED_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_scattered_pkts_flex_rxd,
+		.info = "Scalar Scattered Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_DISABLED,
+			.extra.scattered = true,
+			.extra.flex_desc = true}},
+	[IAVF_RX_BULK_ALLOC] = {
+		.pkt_burst = iavf_recv_pkts_bulk_alloc,
+		.info = "Scalar Bulk Alloc",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_DISABLED,
+			.extra.bulk_alloc = true}},
+	[IAVF_RX_BULK_ALLOC_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_pkts_bulk_alloc,
+		.info = "Scalar Bulk Alloc Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_DISABLED,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true}},
 #ifdef RTE_ARCH_X86
-	[IAVF_RX_SSE] = {iavf_recv_pkts_vec, "Vector SSE",
-		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-	[IAVF_RX_SSE_SCATTERED] = {iavf_recv_scattered_pkts_vec, "Vector Scattered SSE",
-		{IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_128,
-			{.scattered = true, .bulk_alloc = true}}},
-	[IAVF_RX_SSE_FLEX_RXD] = {iavf_recv_pkts_vec_flex_rxd, "Vector Flex SSE",
-		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS, RTE_VECT_SIMD_128,
-			{.flex_desc = true, .bulk_alloc = true}}},
+	[IAVF_RX_SSE] = {
+		.pkt_burst = iavf_recv_pkts_vec,
+		.info = "Vector SSE",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true}},
+	[IAVF_RX_SSE_SCATTERED] = {
+		.pkt_burst = iavf_recv_scattered_pkts_vec,
+		.info = "Vector Scattered SSE",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
+	[IAVF_RX_SSE_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_pkts_vec_flex_rxd,
+		.info = "Vector Flex SSE",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true}},
 	[IAVF_RX_SSE_SCATTERED_FLEX_RXD] = {
-		iavf_recv_scattered_pkts_vec_flex_rxd, "Vector Scattered SSE Flex",
-		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
-			RTE_VECT_SIMD_128,
-			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
-	[IAVF_RX_AVX2] = {iavf_recv_pkts_vec_avx2, "Vector AVX2",
-		{IAVF_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256, {.bulk_alloc = true}}},
-	[IAVF_RX_AVX2_SCATTERED] = {iavf_recv_scattered_pkts_vec_avx2, "Vector Scattered AVX2",
-		{IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_256,
-			{.scattered = true, .bulk_alloc = true}}},
-	[IAVF_RX_AVX2_OFFLOAD] = {iavf_recv_pkts_vec_avx2_offload, "Vector AVX2 Offload",
-		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_256, {.bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_flex_rxd,
+		.info = "Vector Scattered SSE Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS |
+				       RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true}},
+	[IAVF_RX_AVX2] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx2,
+		.info = "Vector AVX2",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.bulk_alloc = true}},
+	[IAVF_RX_AVX2_SCATTERED] = {
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx2,
+		.info = "Vector Scattered AVX2",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
+	[IAVF_RX_AVX2_OFFLOAD] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx2_offload,
+		.info = "Vector AVX2 Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.bulk_alloc = true}},
 	[IAVF_RX_AVX2_SCATTERED_OFFLOAD] = {
-		iavf_recv_scattered_pkts_vec_avx2_offload, "Vector Scattered AVX2 offload",
-		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_256,
-			{.scattered = true, .bulk_alloc = true}}},
-	[IAVF_RX_AVX2_FLEX_RXD] = {iavf_recv_pkts_vec_avx2_flex_rxd, "Vector AVX2 Flex",
-		{IAVF_RX_VECTOR_FLEX_OFFLOADS, RTE_VECT_SIMD_256,
-			{.flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx2_offload,
+		.info = "Vector Scattered AVX2 Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
+	[IAVF_RX_AVX2_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx2_flex_rxd,
+		.info = "Vector AVX2 Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true}},
 	[IAVF_RX_AVX2_SCATTERED_FLEX_RXD] = {
-		iavf_recv_scattered_pkts_vec_avx2_flex_rxd, "Vector Scattered AVX2 Flex",
-		{IAVF_RX_VECTOR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_256,
-			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx2_flex_rxd,
+		.info = "Vector Scattered AVX2 Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true}},
 	[IAVF_RX_AVX2_FLEX_RXD_OFFLOAD] = {
-		iavf_recv_pkts_vec_avx2_flex_rxd_offload, "Vector AVX2 Flex Offload",
-			{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS, RTE_VECT_SIMD_256,
-				{.flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_pkts_vec_avx2_flex_rxd_offload,
+		.info = "Vector AVX2 Flex Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true}},
 	[IAVF_RX_AVX2_SCATTERED_FLEX_RXD_OFFLOAD] = {
-		iavf_recv_scattered_pkts_vec_avx2_flex_rxd_offload,
-		"Vector Scattered AVX2 Flex Offload",
-		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
-			RTE_VECT_SIMD_256,
-			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx2_flex_rxd_offload,
+		.info = "Vector Scattered AVX2 Flex Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS |
+				       RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true}},
 #ifdef CC_AVX512_SUPPORT
-	[IAVF_RX_AVX512] = {iavf_recv_pkts_vec_avx512, "Vector AVX512",
-		{IAVF_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
+	[IAVF_RX_AVX512] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx512,
+		.info = "Vector AVX512",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.bulk_alloc = true}},
 	[IAVF_RX_AVX512_SCATTERED] = {
-		iavf_recv_scattered_pkts_vec_avx512, "Vector Scattered AVX512",
-		{IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_512,
-			{.scattered = true, .bulk_alloc = true}}},
-	[IAVF_RX_AVX512_OFFLOAD] = {iavf_recv_pkts_vec_avx512_offload, "Vector AVX512 Offload",
-		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx512,
+		.info = "Vector Scattered AVX512",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
+	[IAVF_RX_AVX512_OFFLOAD] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx512_offload,
+		.info = "Vector AVX512 Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.bulk_alloc = true}},
 	[IAVF_RX_AVX512_SCATTERED_OFFLOAD] = {
-		iavf_recv_scattered_pkts_vec_avx512_offload, "Vector Scattered AVX512 offload",
-		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_512,
-			{.scattered = true, .bulk_alloc = true}}},
-	[IAVF_RX_AVX512_FLEX_RXD] = {iavf_recv_pkts_vec_avx512_flex_rxd, "Vector AVX512 Flex",
-		{IAVF_RX_VECTOR_FLEX_OFFLOADS, RTE_VECT_SIMD_512,
-			{.flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx512_offload,
+		.info = "Vector Scattered AVX512 Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
+	[IAVF_RX_AVX512_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx512_flex_rxd,
+		.info = "Vector AVX512 Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true}},
 	[IAVF_RX_AVX512_SCATTERED_FLEX_RXD] = {
-		iavf_recv_scattered_pkts_vec_avx512_flex_rxd, "Vector Scattered AVX512 Flex",
-		{IAVF_RX_VECTOR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_512,
-			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx512_flex_rxd,
+		.info = "Vector Scattered AVX512 Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true}},
 	[IAVF_RX_AVX512_FLEX_RXD_OFFLOAD] = {
-		iavf_recv_pkts_vec_avx512_flex_rxd_offload, "Vector AVX512 Flex Offload",
-		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS, RTE_VECT_SIMD_512,
-			{.flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_pkts_vec_avx512_flex_rxd_offload,
+		.info = "Vector AVX512 Flex Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true}},
 	[IAVF_RX_AVX512_SCATTERED_FLEX_RXD_OFFLOAD] = {
-		iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload,
-		"Vector Scattered AVX512 Flex offload",
-		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
-			RTE_VECT_SIMD_512,
-			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload,
+		.info = "Vector Scattered AVX512 Flex Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS |
+				       RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true}},
 #endif
 #elif defined RTE_ARCH_ARM
-	[IAVF_RX_SSE] = {iavf_recv_pkts_vec, "Vector Neon",
-		{IAVF_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
+	[IAVF_RX_SSE] = {
+		.pkt_burst = iavf_recv_pkts_vec,
+		.info = "Vector Neon",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true}},
 #endif
 };
 
-- 
2.34.1



* [PATCH 5/6] net/i40e: reformat the Rx path infos array
  2025-10-14  8:45 [PATCH 0/6] net/intel: fixes and improvements to rx path selection Ciara Loftus
                   ` (3 preceding siblings ...)
  2025-10-14  8:45 ` [PATCH 4/6] net/iavf: reformat the Rx path infos array Ciara Loftus
@ 2025-10-14  8:45 ` Ciara Loftus
  2025-10-14 14:38   ` Bruce Richardson
  2025-10-14  8:45 ` [PATCH 6/6] net/ice: " Ciara Loftus
  2025-10-15 10:07 ` [PATCH v2 0/7] net/intel: fixes and improvements to rx path selection Ciara Loftus
  6 siblings, 1 reply; 22+ messages in thread
From: Ciara Loftus @ 2025-10-14  8:45 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus

In order to improve readability, reformat the Rx path infos array.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/intel/i40e/i40e_rxtx.c | 126 ++++++++++++++++++++++-------
 1 file changed, 95 insertions(+), 31 deletions(-)

diff --git a/drivers/net/intel/i40e/i40e_rxtx.c b/drivers/net/intel/i40e/i40e_rxtx.c
index 2bd0955225..c09696262d 100644
--- a/drivers/net/intel/i40e/i40e_rxtx.c
+++ b/drivers/net/intel/i40e/i40e_rxtx.c
@@ -3290,42 +3290,106 @@ i40e_recycle_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 }
 
 static const struct ci_rx_path_info i40e_rx_path_infos[] = {
-	[I40E_RX_DEFAULT] = { i40e_recv_pkts, "Scalar",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED}},
-	[I40E_RX_SCATTERED] = { i40e_recv_scattered_pkts, "Scalar Scattered",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.scattered = true}}},
-	[I40E_RX_BULK_ALLOC] = { i40e_recv_pkts_bulk_alloc, "Scalar Bulk Alloc",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.bulk_alloc = true}}},
+	[I40E_RX_DEFAULT] = {
+		.pkt_burst = i40e_recv_pkts,
+		.info = "Scalar",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_DISABLED}},
+	[I40E_RX_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts,
+		.info = "Scalar Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_DISABLED,
+			.extra.scattered = true}},
+	[I40E_RX_BULK_ALLOC] = {
+		.pkt_burst = i40e_recv_pkts_bulk_alloc,
+		.info = "Scalar Bulk Alloc",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_DISABLED,
+			.extra.bulk_alloc = true}},
 #ifdef RTE_ARCH_X86
-	[I40E_RX_SSE] = { i40e_recv_pkts_vec, "Vector SSE",
-		{I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-	[I40E_RX_SSE_SCATTERED] = { i40e_recv_scattered_pkts_vec, "Vector SSE Scattered",
-		{I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_128,
-			{.scattered = true, .bulk_alloc = true}}},
-	[I40E_RX_AVX2] = { i40e_recv_pkts_vec_avx2, "Vector AVX2",
-		{I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256, {.bulk_alloc = true}}},
-	[I40E_RX_AVX2_SCATTERED] = { i40e_recv_scattered_pkts_vec_avx2, "Vector AVX2 Scattered",
-		{I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256,
-			{.scattered = true, .bulk_alloc = true}}},
+	[I40E_RX_SSE] = {
+		.pkt_burst = i40e_recv_pkts_vec,
+		.info = "Vector SSE",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true}},
+	[I40E_RX_SSE_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts_vec,
+		.info = "Vector SSE Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
+	[I40E_RX_AVX2] = {
+		.pkt_burst = i40e_recv_pkts_vec_avx2,
+		.info = "Vector AVX2",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.bulk_alloc = true}},
+	[I40E_RX_AVX2_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts_vec_avx2,
+		.info = "Vector AVX2 Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
 #ifdef CC_AVX512_SUPPORT
-	[I40E_RX_AVX512] = { i40e_recv_pkts_vec_avx512, "Vector AVX512",
-		{I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
-	[I40E_RX_AVX512_SCATTERED] = { i40e_recv_scattered_pkts_vec_avx512,
-		"Vector AVX512 Scattered", {I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512,
-			{.scattered = true, .bulk_alloc = true}}},
+	[I40E_RX_AVX512] = {
+		.pkt_burst = i40e_recv_pkts_vec_avx512,
+		.info = "Vector AVX512",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.bulk_alloc = true}},
+	[I40E_RX_AVX512_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts_vec_avx512,
+		.info = "Vector AVX512 Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
 #endif
 #elif defined(RTE_ARCH_ARM64)
-	[I40E_RX_NEON] = { i40e_recv_pkts_vec, "Vector Neon",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-	[I40E_RX_NEON_SCATTERED] = { i40e_recv_scattered_pkts_vec, "Vector Neon Scattered",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_128,
-			{.scattered = true, .bulk_alloc = true}}},
+	[I40E_RX_NEON] = {
+		.pkt_burst = i40e_recv_pkts_vec,
+		.info = "Vector Neon",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true}},
+	[I40E_RX_NEON_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts_vec,
+		.info = "Vector Neon Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
 #elif defined(RTE_ARCH_PPC_64)
-	[I40E_RX_ALTIVEC] = { i40e_recv_pkts_vec, "Vector AltiVec",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-	[I40E_RX_ALTIVEC_SCATTERED] = { i40e_recv_scattered_pkts_vec, "Vector AltiVec Scattered",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_128,
-			{.scattered = true, .bulk_alloc = true}}},
+	[I40E_RX_ALTIVEC] = {
+		.pkt_burst = i40e_recv_pkts_vec,
+		.info = "Vector AltiVec",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true}},
+	[I40E_RX_ALTIVEC_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts_vec,
+		.info = "Vector AltiVec Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
 #endif
 };
 
-- 
2.34.1



* [PATCH 6/6] net/ice: reformat the Rx path infos array
  2025-10-14  8:45 [PATCH 0/6] net/intel: fixes and improvements to rx path selection Ciara Loftus
                   ` (4 preceding siblings ...)
  2025-10-14  8:45 ` [PATCH 5/6] net/i40e: " Ciara Loftus
@ 2025-10-14  8:45 ` Ciara Loftus
  2025-10-14 14:39   ` Bruce Richardson
  2025-10-15 10:07 ` [PATCH v2 0/7] net/intel: fixes and improvements to rx path selection Ciara Loftus
  6 siblings, 1 reply; 22+ messages in thread
From: Ciara Loftus @ 2025-10-14  8:45 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus

In order to improve readability, reformat the Rx path infos array.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/intel/ice/ice_rxtx.c | 124 +++++++++++++++++++++++--------
 1 file changed, 93 insertions(+), 31 deletions(-)

diff --git a/drivers/net/intel/ice/ice_rxtx.c b/drivers/net/intel/ice/ice_rxtx.c
index 411b353417..acc36ceb50 100644
--- a/drivers/net/intel/ice/ice_rxtx.c
+++ b/drivers/net/intel/ice/ice_rxtx.c
@@ -3667,41 +3667,103 @@ ice_xmit_pkts_simple(void *tx_queue,
 }
 
 static const struct ci_rx_path_info ice_rx_path_infos[] = {
-	[ICE_RX_DEFAULT] = {ice_recv_pkts, "Scalar",
-		{ICE_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED}},
-	[ICE_RX_SCATTERED] = {ice_recv_scattered_pkts, "Scalar Scattered",
-		{ICE_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.scattered = true}}},
-	[ICE_RX_BULK_ALLOC] = {ice_recv_pkts_bulk_alloc, "Scalar Bulk Alloc",
-		{ICE_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.bulk_alloc = true}}},
+	[ICE_RX_DEFAULT] = {
+		.pkt_burst = ice_recv_pkts,
+		.info = "Scalar",
+		.features = {
+			.rx_offloads = ICE_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_DISABLED}},
+	[ICE_RX_SCATTERED] = {
+		.pkt_burst = ice_recv_scattered_pkts,
+		.info = "Scalar Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_DISABLED,
+			.extra.scattered = true}},
+	[ICE_RX_BULK_ALLOC] = {
+		.pkt_burst = ice_recv_pkts_bulk_alloc,
+		.info = "Scalar Bulk Alloc",
+		.features = {
+			.rx_offloads = ICE_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_DISABLED,
+			.extra.bulk_alloc = true}},
 #ifdef RTE_ARCH_X86
-	[ICE_RX_SSE] = {ice_recv_pkts_vec, "Vector SSE",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-	[ICE_RX_SSE_SCATTERED] = {ice_recv_scattered_pkts_vec, "Vector SSE Scattered",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_128,
-			{.scattered = true, .bulk_alloc = true}}},
-	[ICE_RX_AVX2] = {ice_recv_pkts_vec_avx2, "Vector AVX2",
-		{ICE_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256, {.bulk_alloc = true}}},
-	[ICE_RX_AVX2_SCATTERED] = {ice_recv_scattered_pkts_vec_avx2, "Vector AVX2 Scattered",
-		{ICE_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256,
-			{.scattered = true, .bulk_alloc = true}}},
-	[ICE_RX_AVX2_OFFLOAD] = {ice_recv_pkts_vec_avx2_offload, "Offload Vector AVX2",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_256, {.bulk_alloc = true}}},
+	[ICE_RX_SSE] = {
+		.pkt_burst = ice_recv_pkts_vec,
+		.info = "Vector SSE",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true}},
+	[ICE_RX_SSE_SCATTERED] = {
+		.pkt_burst = ice_recv_scattered_pkts_vec,
+		.info = "Vector SSE Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
+	[ICE_RX_AVX2] = {
+		.pkt_burst = ice_recv_pkts_vec_avx2,
+		.info = "Vector AVX2",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.bulk_alloc = true}},
+	[ICE_RX_AVX2_SCATTERED] = {
+		.pkt_burst = ice_recv_scattered_pkts_vec_avx2,
+		.info = "Vector AVX2 Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
+	[ICE_RX_AVX2_OFFLOAD] = {
+		.pkt_burst = ice_recv_pkts_vec_avx2_offload,
+		.info = "Offload Vector AVX2",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.bulk_alloc = true}},
 	[ICE_RX_AVX2_SCATTERED_OFFLOAD] = {
-		ice_recv_scattered_pkts_vec_avx2_offload, "Offload Vector AVX2 Scattered",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_256,
-			{.scattered = true, .bulk_alloc = true}}},
+		.pkt_burst = ice_recv_scattered_pkts_vec_avx2_offload,
+		.info = "Offload Vector AVX2 Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
 #ifdef CC_AVX512_SUPPORT
-	[ICE_RX_AVX512] = {ice_recv_pkts_vec_avx512, "Vector AVX512",
-		{ICE_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
-	[ICE_RX_AVX512_SCATTERED] = {ice_recv_scattered_pkts_vec_avx512, "Vector AVX512 Scattered",
-		{ICE_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512,
-			{.scattered = true, .bulk_alloc = true}}},
-	[ICE_RX_AVX512_OFFLOAD] = {ice_recv_pkts_vec_avx512_offload, "Offload Vector AVX512",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
+	[ICE_RX_AVX512] = {
+		.pkt_burst = ice_recv_pkts_vec_avx512,
+		.info = "Vector AVX512",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.bulk_alloc = true}},
+	[ICE_RX_AVX512_SCATTERED] = {
+		.pkt_burst = ice_recv_scattered_pkts_vec_avx512,
+		.info = "Vector AVX512 Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
+	[ICE_RX_AVX512_OFFLOAD] = {
+		.pkt_burst = ice_recv_pkts_vec_avx512_offload,
+		.info = "Offload Vector AVX512",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.bulk_alloc = true}},
 	[ICE_RX_AVX512_SCATTERED_OFFLOAD] = {
-		ice_recv_scattered_pkts_vec_avx512_offload, "Offload Vector AVX512 Scattered",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_512,
-			{.scattered = true, .bulk_alloc = true}}},
+		.pkt_burst = ice_recv_scattered_pkts_vec_avx512_offload,
+		.info = "Offload Vector AVX512 Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true}},
 #endif
 #endif
 };
-- 
2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 1/6] net/intel: fix Rx vector capability detection
  2025-10-14  8:45 ` [PATCH 1/6] net/intel: fix Rx vector capability detection Ciara Loftus
@ 2025-10-14 14:16   ` Bruce Richardson
  2025-10-14 14:59     ` Loftus, Ciara
  0 siblings, 1 reply; 22+ messages in thread
From: Bruce Richardson @ 2025-10-14 14:16 UTC (permalink / raw)
  To: Ciara Loftus; +Cc: dev, stable

On Tue, Oct 14, 2025 at 08:45:12AM +0000, Ciara Loftus wrote:
> The common function for detecting whether an rxq could use a vector
> rx path would automatically disqualify rx queues that had the
> timestamp offload enabled. This was incorrect behaviour because the
> iavf driver which uses this common function supports timestamp offload
> on its vector paths. Fix this by removing the conditional check for
> timestamp offload.
> 
> Fixes: 9eb60580d155 ("net/intel: extract common Rx vector criteria")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> ---
>  drivers/net/intel/common/rx.h | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/intel/common/rx.h b/drivers/net/intel/common/rx.h
> index 741808f573..d3e4492ff1 100644
> --- a/drivers/net/intel/common/rx.h
> +++ b/drivers/net/intel/common/rx.h
> @@ -235,9 +235,8 @@ ci_rxq_vec_capable(uint16_t nb_desc, uint16_t rx_free_thresh, uint64_t offloads)
>  			(nb_desc % rx_free_thresh) != 0)
>  		return false;
>  
> -	/* no driver supports timestamping or buffer split on vector path */
> -	if ((offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
> -			(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT))
> +	/* no driver supports buffer split on vector path */
> +	if (offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)
>  		return false;
>  
>  	return true;

Given that we check all offload flags later when doing final path
selection, can we drop the flags check here completely? Just have this
function check only the non-feature-related conditions, such as ring size
etc.

/Bruce

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/6] net/iavf: fix Rx paths feature definitions
  2025-10-14  8:45 ` [PATCH 2/6] net/iavf: fix Rx paths feature definitions Ciara Loftus
@ 2025-10-14 14:26   ` Bruce Richardson
  0 siblings, 0 replies; 22+ messages in thread
From: Bruce Richardson @ 2025-10-14 14:26 UTC (permalink / raw)
  To: Ciara Loftus; +Cc: dev, stable

On Tue, Oct 14, 2025 at 08:45:13AM +0000, Ciara Loftus wrote:
> Two rx paths had incorrect feature and offload definitions
> which led to incorrect path selections. Fix these.
> 
> Remove timestamp offload from the list of offloads
> supported by paths that use the flexible rx descriptor. It
> is only available in the "offload" versions of those paths.
> 
> Fixes: 91e3205d72d8 ("net/iavf: use common Rx path selection infrastructure")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> ---
>  drivers/net/intel/iavf/iavf_rxtx.c | 5 +++--
>  drivers/net/intel/iavf/iavf_rxtx.h | 1 -
>  2 files changed, 3 insertions(+), 3 deletions(-)
> 
Acked-by: Bruce Richardson <bruce.richardson@intel.com>


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 3/6] net/iavf: fix Rx path selection for scalar flex bulk alloc
  2025-10-14  8:45 ` [PATCH 3/6] net/iavf: fix Rx path selection for scalar flex bulk alloc Ciara Loftus
@ 2025-10-14 14:33   ` Bruce Richardson
  0 siblings, 0 replies; 22+ messages in thread
From: Bruce Richardson @ 2025-10-14 14:33 UTC (permalink / raw)
  To: Ciara Loftus; +Cc: dev, stable

On Tue, Oct 14, 2025 at 08:45:14AM +0000, Ciara Loftus wrote:
> The scalar bulk alloc rx burst function supports both legacy and
> flexible rx descriptors. The rx path selection infrastructure introduced
> in commit 91e3205d72d8 ("net/iavf: use common Rx path selection
> infrastructure") cannot define a path that supports both descriptor
> formats. To solve this problem, have two rx path definitions which both
> point to the same rx burst function but report different descriptor
> formats. This allows the rx path selection function to choose the
> correct path.
> 
> Fixes: 91e3205d72d8 ("net/iavf: use common Rx path selection infrastructure")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>

I find it strange that both point to the one function but have different
offload capabilities. However, I realise that the code path bifurcates
again later in the function, so doing it this way is correct!

Acked-by: Bruce Richardson <bruce.richardson@intel.com>


> ---
>  drivers/net/intel/iavf/iavf.h      | 1 +
>  drivers/net/intel/iavf/iavf_rxtx.c | 3 +++
>  2 files changed, 4 insertions(+)
> 
> diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
> index 435902fbc2..4e76162337 100644
> --- a/drivers/net/intel/iavf/iavf.h
> +++ b/drivers/net/intel/iavf/iavf.h
> @@ -327,6 +327,7 @@ enum iavf_rx_func_type {
>  	IAVF_RX_FLEX_RXD,
>  	IAVF_RX_SCATTERED_FLEX_RXD,
>  	IAVF_RX_BULK_ALLOC,
> +	IAVF_RX_BULK_ALLOC_FLEX_RXD,
>  	IAVF_RX_SSE,
>  	IAVF_RX_SSE_SCATTERED,
>  	IAVF_RX_SSE_FLEX_RXD,
> diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c
> index 67c73f9ad6..bbf3a1737e 100644
> --- a/drivers/net/intel/iavf/iavf_rxtx.c
> +++ b/drivers/net/intel/iavf/iavf_rxtx.c
> @@ -3734,6 +3734,9 @@ static const struct ci_rx_path_info iavf_rx_path_infos[] = {
>  				{.scattered = true, .flex_desc = true}}},
>  	[IAVF_RX_BULK_ALLOC] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc",
>  		{IAVF_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.bulk_alloc = true}}},
> +	[IAVF_RX_BULK_ALLOC_FLEX_RXD] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc Flex",
> +			{IAVF_RX_SCALAR_FLEX_OFFLOADS, RTE_VECT_SIMD_DISABLED,
> +			{.flex_desc = true, .bulk_alloc = true}}},
>  #ifdef RTE_ARCH_X86
>  	[IAVF_RX_SSE] = {iavf_recv_pkts_vec, "Vector SSE",
>  		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 4/6] net/iavf: reformat the Rx path infos array
  2025-10-14  8:45 ` [PATCH 4/6] net/iavf: reformat the Rx path infos array Ciara Loftus
@ 2025-10-14 14:38   ` Bruce Richardson
  0 siblings, 0 replies; 22+ messages in thread
From: Bruce Richardson @ 2025-10-14 14:38 UTC (permalink / raw)
  To: Ciara Loftus; +Cc: dev

On Tue, Oct 14, 2025 at 08:45:15AM +0000, Ciara Loftus wrote:
> In order to improve readability, reformat the rx path infos array.
> 
> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> ---

Thanks, I find this format much more readable.

Some comments inline below.

Acked-by: Bruce Richardson <bruce.richardson@intel.com>


>  drivers/net/intel/iavf/iavf_rxtx.c | 289 +++++++++++++++++++++--------
>  1 file changed, 210 insertions(+), 79 deletions(-)
> 
> diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c
> index bbf3a1737e..58d5747c40 100644
> --- a/drivers/net/intel/iavf/iavf_rxtx.c
> +++ b/drivers/net/intel/iavf/iavf_rxtx.c
> @@ -3720,99 +3720,230 @@ iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
>  				uint16_t nb_pkts);
>  
>  static const struct ci_rx_path_info iavf_rx_path_infos[] = {
> -	[IAVF_RX_DISABLED] = {iavf_recv_pkts_no_poll, "Disabled",
> -		{IAVF_RX_NO_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.disabled = true}}},
> -	[IAVF_RX_DEFAULT] = {iavf_recv_pkts, "Scalar",
> -		{IAVF_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED}},
> -	[IAVF_RX_SCATTERED] = {iavf_recv_scattered_pkts, "Scalar Scattered",
> -		{IAVF_RX_SCALAR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_DISABLED,
> -			{.scattered = true}}},
> -	[IAVF_RX_FLEX_RXD] = {iavf_recv_pkts_flex_rxd, "Scalar Flex",
> -		{IAVF_RX_SCALAR_FLEX_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.flex_desc = true}}},
> -	[IAVF_RX_SCATTERED_FLEX_RXD] = {iavf_recv_scattered_pkts_flex_rxd, "Scalar Scattered Flex",
> -		{IAVF_RX_SCALAR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_DISABLED,
> -				{.scattered = true, .flex_desc = true}}},
> -	[IAVF_RX_BULK_ALLOC] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc",
> -		{IAVF_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.bulk_alloc = true}}},
> -	[IAVF_RX_BULK_ALLOC_FLEX_RXD] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc Flex",
> -			{IAVF_RX_SCALAR_FLEX_OFFLOADS, RTE_VECT_SIMD_DISABLED,
> -			{.flex_desc = true, .bulk_alloc = true}}},
> +	[IAVF_RX_DISABLED] = {
> +		.pkt_burst = iavf_recv_pkts_no_poll,
> +		.info = "Disabled",
> +		.features = {
> +			.rx_offloads = IAVF_RX_NO_OFFLOADS,

This is zero, so I would tend to omit it.

> +			.simd_width = RTE_VECT_SIMD_DISABLED,

Can our selection logic all still work if we have the simd_width set as zero for
these paths, also indicating SIMD is disabled? Again, it would allow us to
omit the SIMD value altogether for non-vector paths.

> +			.extra.disabled = true}},

Space before the closing braces? Maybe put them on a new line?

> +	[IAVF_RX_DEFAULT] = {
> +		.pkt_burst = iavf_recv_pkts,
> +		.info = "Scalar",
> +		.features = {
> +			.rx_offloads = IAVF_RX_SCALAR_OFFLOADS,
> +			.simd_width = RTE_VECT_SIMD_DISABLED}},
<snip>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 5/6] net/i40e: reformat the Rx path infos array
  2025-10-14  8:45 ` [PATCH 5/6] net/i40e: " Ciara Loftus
@ 2025-10-14 14:38   ` Bruce Richardson
  0 siblings, 0 replies; 22+ messages in thread
From: Bruce Richardson @ 2025-10-14 14:38 UTC (permalink / raw)
  To: Ciara Loftus; +Cc: dev

On Tue, Oct 14, 2025 at 08:45:16AM +0000, Ciara Loftus wrote:
> In order to improve readability, reformat the rx path infos array.
> 
> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> ---
>  drivers/net/intel/i40e/i40e_rxtx.c | 126 ++++++++++++++++++++++-------
>  1 file changed, 95 insertions(+), 31 deletions(-)
> 
Any comments from the previous patch also apply here.

Acked-by: Bruce Richardson <bruce.richardson@intel.com>


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 6/6] net/ice: reformat the Rx path infos array
  2025-10-14  8:45 ` [PATCH 6/6] net/ice: " Ciara Loftus
@ 2025-10-14 14:39   ` Bruce Richardson
  0 siblings, 0 replies; 22+ messages in thread
From: Bruce Richardson @ 2025-10-14 14:39 UTC (permalink / raw)
  To: Ciara Loftus; +Cc: dev

On Tue, Oct 14, 2025 at 08:45:17AM +0000, Ciara Loftus wrote:
> In order to improve readability, reformat the rx path infos array.
> 
> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> ---
>  drivers/net/intel/ice/ice_rxtx.c | 124 +++++++++++++++++++++++--------
>  1 file changed, 93 insertions(+), 31 deletions(-)

Acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH 1/6] net/intel: fix Rx vector capability detection
  2025-10-14 14:16   ` Bruce Richardson
@ 2025-10-14 14:59     ` Loftus, Ciara
  0 siblings, 0 replies; 22+ messages in thread
From: Loftus, Ciara @ 2025-10-14 14:59 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: dev, stable

> 
> On Tue, Oct 14, 2025 at 08:45:12AM +0000, Ciara Loftus wrote:
> > The common function for detecting whether an rxq could use a vector
> > rx path would automatically disqualify rx queues that had the
> > timestamp offload enabled. This was incorrect behaviour because the
> > iavf driver which uses this common function supports timestamp offload
> > on its vector paths. Fix this by removing the conditional check for
> > timestamp offload.
> >
> > Fixes: 9eb60580d155 ("net/intel: extract common Rx vector criteria")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> > ---
> >  drivers/net/intel/common/rx.h | 5 ++---
> >  1 file changed, 2 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/intel/common/rx.h b/drivers/net/intel/common/rx.h
> > index 741808f573..d3e4492ff1 100644
> > --- a/drivers/net/intel/common/rx.h
> > +++ b/drivers/net/intel/common/rx.h
> > @@ -235,9 +235,8 @@ ci_rxq_vec_capable(uint16_t nb_desc, uint16_t
> rx_free_thresh, uint64_t offloads)
> >  			(nb_desc % rx_free_thresh) != 0)
> >  		return false;
> >
> > -	/* no driver supports timestamping or buffer split on vector path */
> > -	if ((offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
> > -			(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT))
> > +	/* no driver supports buffer split on vector path */
> > +	if (offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)
> >  		return false;
> >
> >  	return true;
> 
> Given that we check all offload flags later when doing final path
> selection, can we drop the flags check here completely? Just have this
> funciont check on the non-feature-related conditions, such as ring size
> etc.

Sure.
This function is used by ixgbe, which doesn't use the common
rx path selection function yet. But I think this check is
redundant for ixgbe too, because buffer split isn't supported
on any of its paths, so initialisation would fail earlier than
here, during dev_configure.

> 
> /Bruce

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 0/7] net/intel: fixes and improvements to rx path selection
  2025-10-14  8:45 [PATCH 0/6] net/intel: fixes and improvements to rx path selection Ciara Loftus
                   ` (5 preceding siblings ...)
  2025-10-14  8:45 ` [PATCH 6/6] net/ice: " Ciara Loftus
@ 2025-10-15 10:07 ` Ciara Loftus
  2025-10-15 10:07   ` [PATCH v2 1/7] net/intel: fix Rx vector capability detection Ciara Loftus
                     ` (6 more replies)
  6 siblings, 7 replies; 22+ messages in thread
From: Ciara Loftus @ 2025-10-15 10:07 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus

This series contains a number of fixes and improvements to the logic
concerned with selecting an rx path in the intel drivers.

Patch 1 fixes incorrect behaviour in the ci_rxq_vec_capable function
which would disqualify rxqs that have the timestamp offload enabled from
selecting vector paths. This was incorrect because iavf vector paths
support timestamp offload.

Patch 2 removes a redundant check from the ci_rxq_vec_capable function
that disqualifies rxqs that have the buffer split offload enabled from
selecting vector paths. This check is performed during the common rx
path selection for three of the drivers and during dev_configure for the
fourth, so the check in ci_rxq_vec_capable is unneeded. Although it is
similar, this patch was kept separate from patch 1 because it is only
relevant after the three aforementioned drivers adopted the common rx
path selection infrastructure, whereas patch 1 is a fix for an issue
that existed prior to that.

Patches 3 and 4 make fixes to the iavf rx path definitions, ensuring the
correct offloads and features are defined for all paths.

The final three patches reformat the arrays in the i40e, iavf and ice
drivers that hold the per-Rx-path information used in the common
selection process, in an attempt to improve readability.

v2:
* Added a new patch (2) that removes the redundant check for the buffer
split offload in the common function.
* Removed the useless defines IAVF_RX_NO_OFFLOADS and ICE_RX_NO_OFFLOADS
* Use a newline for closing braces in the rx path infos arrays
* Let a simd width of zero represent scalar paths instead of 64, which
reduces the number of fields that need to be initialised for scalar
path definitions

Ciara Loftus (7):
  net/intel: fix Rx vector capability detection
  net/intel: remove redundant Rx offload check
  net/iavf: fix Rx paths feature definitions
  net/iavf: fix Rx path selection for scalar flex bulk alloc
  net/iavf: reformat the Rx path infos array
  net/i40e: reformat the Rx path infos array
  net/ice: reformat the Rx path infos array

 drivers/net/intel/common/rx.h                 |   7 +-
 drivers/net/intel/i40e/i40e_rxtx.c            | 149 ++++++--
 drivers/net/intel/i40e/i40e_rxtx_vec_common.h |   2 +-
 drivers/net/intel/iavf/iavf.h                 |   1 +
 drivers/net/intel/iavf/iavf_rxtx.c            | 335 ++++++++++++++----
 drivers/net/intel/iavf/iavf_rxtx.h            |   2 -
 drivers/net/intel/ice/ice_rxtx.c              | 147 ++++++--
 drivers/net/intel/ice/ice_rxtx.h              |   1 -
 drivers/net/intel/ice/ice_rxtx_vec_common.h   |   2 +-
 .../net/intel/ixgbe/ixgbe_rxtx_vec_common.c   |   2 +-
 10 files changed, 498 insertions(+), 150 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 1/7] net/intel: fix Rx vector capability detection
  2025-10-15 10:07 ` [PATCH v2 0/7] net/intel: fixes and improvements to rx path selection Ciara Loftus
@ 2025-10-15 10:07   ` Ciara Loftus
  2025-10-15 10:07   ` [PATCH v2 2/7] net/intel: remove redundant Rx offload check Ciara Loftus
                     ` (5 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Ciara Loftus @ 2025-10-15 10:07 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus, stable

The common function for detecting whether an rxq could use a vector
rx path would automatically disqualify rx queues that had the
timestamp offload enabled. This was incorrect behaviour because the
iavf driver which uses this common function supports timestamp offload
on its vector paths. Fix this by removing the conditional check for
timestamp offload.

Fixes: 9eb60580d155 ("net/intel: extract common Rx vector criteria")
Cc: stable@dpdk.org

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/intel/common/rx.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/intel/common/rx.h b/drivers/net/intel/common/rx.h
index 741808f573..d3e4492ff1 100644
--- a/drivers/net/intel/common/rx.h
+++ b/drivers/net/intel/common/rx.h
@@ -235,9 +235,8 @@ ci_rxq_vec_capable(uint16_t nb_desc, uint16_t rx_free_thresh, uint64_t offloads)
 			(nb_desc % rx_free_thresh) != 0)
 		return false;
 
-	/* no driver supports timestamping or buffer split on vector path */
-	if ((offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
-			(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT))
+	/* no driver supports buffer split on vector path */
+	if (offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)
 		return false;
 
 	return true;
-- 
2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 2/7] net/intel: remove redundant Rx offload check
  2025-10-15 10:07 ` [PATCH v2 0/7] net/intel: fixes and improvements to rx path selection Ciara Loftus
  2025-10-15 10:07   ` [PATCH v2 1/7] net/intel: fix Rx vector capability detection Ciara Loftus
@ 2025-10-15 10:07   ` Ciara Loftus
  2025-10-15 10:07   ` [PATCH v2 3/7] net/iavf: fix Rx paths feature definitions Ciara Loftus
                     ` (4 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Ciara Loftus @ 2025-10-15 10:07 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus

Four Intel drivers use a common vector compatibility function to
determine if certain characteristics about a given rxq are compatible
with vector rx paths. The function checks if the buffer split offload is
enabled and disqualifies the rxq if it is, because that offload is not
available on any vector path. However, this check is redundant because
three of the drivers that use the function now use the common rx path
selection framework, which performs essentially the same validation when
ensuring the requested offloads are available on the rx path to be
selected. The fourth driver does not support buffer split at all, so the
rx queue will never have that offload enabled, as initialisation would
fail before getting to the path selection step. The redundant check is
removed in this commit.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/intel/common/rx.h                   | 6 +-----
 drivers/net/intel/i40e/i40e_rxtx_vec_common.h   | 2 +-
 drivers/net/intel/iavf/iavf_rxtx.c              | 2 +-
 drivers/net/intel/ice/ice_rxtx_vec_common.h     | 2 +-
 drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.c | 2 +-
 5 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/net/intel/common/rx.h b/drivers/net/intel/common/rx.h
index d3e4492ff1..5012e4fced 100644
--- a/drivers/net/intel/common/rx.h
+++ b/drivers/net/intel/common/rx.h
@@ -228,17 +228,13 @@ ci_rxq_mbuf_initializer(uint16_t port_id)
  * Individual drivers may have other further tests beyond this.
  */
 static inline bool
-ci_rxq_vec_capable(uint16_t nb_desc, uint16_t rx_free_thresh, uint64_t offloads)
+ci_rxq_vec_capable(uint16_t nb_desc, uint16_t rx_free_thresh)
 {
 	if (!rte_is_power_of_2(nb_desc) ||
 			rx_free_thresh < CI_RX_MAX_BURST ||
 			(nb_desc % rx_free_thresh) != 0)
 		return false;
 
-	/* no driver supports buffer split on vector path */
-	if (offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)
-		return false;
-
 	return true;
 }
 
diff --git a/drivers/net/intel/i40e/i40e_rxtx_vec_common.h b/drivers/net/intel/i40e/i40e_rxtx_vec_common.h
index 39c9d2ee10..14651f2f06 100644
--- a/drivers/net/intel/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/intel/i40e/i40e_rxtx_vec_common.h
@@ -62,7 +62,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 		struct ci_rx_queue *rxq = dev->data->rx_queues[i];
 		if (!rxq)
 			continue;
-		if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh, rxq->offloads))
+		if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh))
 			return -1;
 	}
 
diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c
index 775fb4a66f..f500ba030f 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.c
+++ b/drivers/net/intel/iavf/iavf_rxtx.c
@@ -727,7 +727,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 #if defined RTE_ARCH_X86 || defined RTE_ARCH_ARM
 	/* check vector conflict */
-	if (ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh, rxq->offloads) &&
+	if (ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh) &&
 			iavf_rxq_vec_setup(rxq)) {
 		PMD_DRV_LOG(ERR, "Failed vector rx setup.");
 		return -EINVAL;
diff --git a/drivers/net/intel/ice/ice_rxtx_vec_common.h b/drivers/net/intel/ice/ice_rxtx_vec_common.h
index 07996ab2b7..a7cc4736cf 100644
--- a/drivers/net/intel/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/intel/ice/ice_rxtx_vec_common.h
@@ -78,7 +78,7 @@ ice_rx_vec_queue_default(struct ci_rx_queue *rxq)
 	if (!rxq)
 		return -1;
 
-	if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh, rxq->offloads))
+	if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh))
 		return -1;
 
 	if (rxq->proto_xtr != PROTO_XTR_NONE)
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.c b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.c
index 94fbde1de2..eb7c79eaf9 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.c
@@ -131,7 +131,7 @@ ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev)
 		struct ci_rx_queue *rxq = dev->data->rx_queues[i];
 		if (!rxq)
 			continue;
-		if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh, rxq->offloads))
+		if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh))
 			return -1;
 	}
 	return 0;
-- 
2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 3/7] net/iavf: fix Rx paths feature definitions
  2025-10-15 10:07 ` [PATCH v2 0/7] net/intel: fixes and improvements to rx path selection Ciara Loftus
  2025-10-15 10:07   ` [PATCH v2 1/7] net/intel: fix Rx vector capability detection Ciara Loftus
  2025-10-15 10:07   ` [PATCH v2 2/7] net/intel: remove redundant Rx offload check Ciara Loftus
@ 2025-10-15 10:07   ` Ciara Loftus
  2025-10-15 10:07   ` [PATCH v2 4/7] net/iavf: fix Rx path selection for scalar flex bulk alloc Ciara Loftus
                     ` (3 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Ciara Loftus @ 2025-10-15 10:07 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus, stable

Two rx paths had incorrect feature and offload definitions
which led to incorrect path selections. Fix these.

Remove timestamp offload from the list of offloads
supported by paths that use the flexible rx descriptor. It
is only available in the "offload" versions of those paths.

Fixes: 91e3205d72d8 ("net/iavf: use common Rx path selection infrastructure")
Cc: stable@dpdk.org

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/intel/iavf/iavf_rxtx.c | 5 +++--
 drivers/net/intel/iavf/iavf_rxtx.h | 1 -
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c
index f500ba030f..d3bf062619 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.c
+++ b/drivers/net/intel/iavf/iavf_rxtx.c
@@ -3768,13 +3768,14 @@ static const struct ci_rx_path_info iavf_rx_path_infos[] = {
 			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
 	[IAVF_RX_AVX2_FLEX_RXD_OFFLOAD] = {
 		iavf_recv_pkts_vec_avx2_flex_rxd_offload, "Vector AVX2 Flex Offload",
-			{IAVF_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256,
+			{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS, RTE_VECT_SIMD_256,
 				{.flex_desc = true, .bulk_alloc = true}}},
 	[IAVF_RX_AVX2_SCATTERED_FLEX_RXD_OFFLOAD] = {
 		iavf_recv_scattered_pkts_vec_avx2_flex_rxd_offload,
 		"Vector Scattered AVX2 Flex Offload",
 		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
-			RTE_VECT_SIMD_256, {.flex_desc = true, .bulk_alloc = true}}},
+			RTE_VECT_SIMD_256,
+			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
 #ifdef CC_AVX512_SUPPORT
 	[IAVF_RX_AVX512] = {iavf_recv_pkts_vec_avx512, "Vector AVX512",
 		{IAVF_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h
index 3f461efb28..44be29caf6 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.h
+++ b/drivers/net/intel/iavf/iavf_rxtx.h
@@ -83,7 +83,6 @@
 /* vector paths that use the flex rx desc */
 #define IAVF_RX_VECTOR_FLEX_OFFLOADS (			\
 		IAVF_RX_VECTOR_OFFLOADS |		\
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP |		\
 		RTE_ETH_RX_OFFLOAD_SECURITY)
 /* vector offload paths */
 #define IAVF_RX_VECTOR_OFFLOAD_OFFLOADS (		\
-- 
2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 4/7] net/iavf: fix Rx path selection for scalar flex bulk alloc
  2025-10-15 10:07 ` [PATCH v2 0/7] net/intel: fixes and improvements to rx path selection Ciara Loftus
                     ` (2 preceding siblings ...)
  2025-10-15 10:07   ` [PATCH v2 3/7] net/iavf: fix Rx paths feature definitions Ciara Loftus
@ 2025-10-15 10:07   ` Ciara Loftus
  2025-10-15 10:07   ` [PATCH v2 5/7] net/iavf: reformat the Rx path infos array Ciara Loftus
                     ` (2 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Ciara Loftus @ 2025-10-15 10:07 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus, stable

The scalar bulk alloc rx burst function supports both legacy and
flexible rx descriptors. The rx path selection infrastructure introduced
in commit 91e3205d72d8 ("net/iavf: use common Rx path selection
infrastructure") cannot define a path that supports both descriptor
formats. To solve this problem, add two rx path definitions that both
point to the same rx burst function but report different descriptor
formats. This allows the rx path selection function to choose the
correct path.

Fixes: 91e3205d72d8 ("net/iavf: use common Rx path selection infrastructure")
Cc: stable@dpdk.org

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/intel/iavf/iavf.h      | 1 +
 drivers/net/intel/iavf/iavf_rxtx.c | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 435902fbc2..4e76162337 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -327,6 +327,7 @@ enum iavf_rx_func_type {
 	IAVF_RX_FLEX_RXD,
 	IAVF_RX_SCATTERED_FLEX_RXD,
 	IAVF_RX_BULK_ALLOC,
+	IAVF_RX_BULK_ALLOC_FLEX_RXD,
 	IAVF_RX_SSE,
 	IAVF_RX_SSE_SCATTERED,
 	IAVF_RX_SSE_FLEX_RXD,
diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c
index d3bf062619..e217328823 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.c
+++ b/drivers/net/intel/iavf/iavf_rxtx.c
@@ -3734,6 +3734,9 @@ static const struct ci_rx_path_info iavf_rx_path_infos[] = {
 				{.scattered = true, .flex_desc = true}}},
 	[IAVF_RX_BULK_ALLOC] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc",
 		{IAVF_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.bulk_alloc = true}}},
+	[IAVF_RX_BULK_ALLOC_FLEX_RXD] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc Flex",
+			{IAVF_RX_SCALAR_FLEX_OFFLOADS, RTE_VECT_SIMD_DISABLED,
+			{.flex_desc = true, .bulk_alloc = true}}},
 #ifdef RTE_ARCH_X86
 	[IAVF_RX_SSE] = {iavf_recv_pkts_vec, "Vector SSE",
 		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-- 
2.34.1



* [PATCH v2 5/7] net/iavf: reformat the Rx path infos array
  2025-10-15 10:07 ` [PATCH v2 0/7] net/intel: fixes and improvements to rx path selection Ciara Loftus
                     ` (3 preceding siblings ...)
  2025-10-15 10:07   ` [PATCH v2 4/7] net/iavf: fix Rx path selection for scalar flex bulk alloc Ciara Loftus
@ 2025-10-15 10:07   ` Ciara Loftus
  2025-10-15 10:07   ` [PATCH v2 6/7] net/i40e: " Ciara Loftus
  2025-10-15 10:07   ` [PATCH v2 7/7] net/ice: " Ciara Loftus
  6 siblings, 0 replies; 22+ messages in thread
From: Ciara Loftus @ 2025-10-15 10:07 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus

In order to improve readability, reformat the rx path infos array.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
v2:
* Removed redundant assignment of zero value (IAVF_RX_NO_OFFLOADS) and
removed its definition as it is no longer used.
* Newline for closing braces.
* Removed assignment of RTE_VECT_SIMD_DISABLED to simd_width, since the
selection logic works when this is set to zero for the scalar path.
---
 drivers/net/intel/iavf/iavf_rxtx.c | 337 ++++++++++++++++++++++-------
 drivers/net/intel/iavf/iavf_rxtx.h |   1 -
 2 files changed, 258 insertions(+), 80 deletions(-)

diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c
index e217328823..a3ef13c791 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.c
+++ b/drivers/net/intel/iavf/iavf_rxtx.c
@@ -3720,99 +3720,278 @@ iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
 				uint16_t nb_pkts);
 
 static const struct ci_rx_path_info iavf_rx_path_infos[] = {
-	[IAVF_RX_DISABLED] = {iavf_recv_pkts_no_poll, "Disabled",
-		{IAVF_RX_NO_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.disabled = true}}},
-	[IAVF_RX_DEFAULT] = {iavf_recv_pkts, "Scalar",
-		{IAVF_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED}},
-	[IAVF_RX_SCATTERED] = {iavf_recv_scattered_pkts, "Scalar Scattered",
-		{IAVF_RX_SCALAR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_DISABLED,
-			{.scattered = true}}},
-	[IAVF_RX_FLEX_RXD] = {iavf_recv_pkts_flex_rxd, "Scalar Flex",
-		{IAVF_RX_SCALAR_FLEX_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.flex_desc = true}}},
-	[IAVF_RX_SCATTERED_FLEX_RXD] = {iavf_recv_scattered_pkts_flex_rxd, "Scalar Scattered Flex",
-		{IAVF_RX_SCALAR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_DISABLED,
-				{.scattered = true, .flex_desc = true}}},
-	[IAVF_RX_BULK_ALLOC] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc",
-		{IAVF_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.bulk_alloc = true}}},
-	[IAVF_RX_BULK_ALLOC_FLEX_RXD] = {iavf_recv_pkts_bulk_alloc, "Scalar Bulk Alloc Flex",
-			{IAVF_RX_SCALAR_FLEX_OFFLOADS, RTE_VECT_SIMD_DISABLED,
-			{.flex_desc = true, .bulk_alloc = true}}},
+	[IAVF_RX_DISABLED] = {
+		.pkt_burst = iavf_recv_pkts_no_poll,
+		.info = "Disabled",
+		.features = {
+			.extra.disabled = true
+		}
+	},
+	[IAVF_RX_DEFAULT] = {
+		.pkt_burst = iavf_recv_pkts,
+		.info = "Scalar",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_OFFLOADS
+		}
+	},
+	[IAVF_RX_SCATTERED] = {
+		.pkt_burst = iavf_recv_scattered_pkts,
+		.info = "Scalar Scattered",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.extra.scattered = true
+		}
+	},
+	[IAVF_RX_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_pkts_flex_rxd,
+		.info = "Scalar Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_FLEX_OFFLOADS,
+			.extra.flex_desc = true
+		}
+	},
+	[IAVF_RX_SCATTERED_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_scattered_pkts_flex_rxd,
+		.info = "Scalar Scattered Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.extra.scattered = true,
+			.extra.flex_desc = true
+		}
+	},
+	[IAVF_RX_BULK_ALLOC] = {
+		.pkt_burst = iavf_recv_pkts_bulk_alloc,
+		.info = "Scalar Bulk Alloc",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_OFFLOADS,
+			.extra.bulk_alloc = true
+		}
+	},
+	[IAVF_RX_BULK_ALLOC_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_pkts_bulk_alloc,
+		.info = "Scalar Bulk Alloc Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_FLEX_OFFLOADS,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true
+		}
+	},
 #ifdef RTE_ARCH_X86
-	[IAVF_RX_SSE] = {iavf_recv_pkts_vec, "Vector SSE",
-		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-	[IAVF_RX_SSE_SCATTERED] = {iavf_recv_scattered_pkts_vec, "Vector Scattered SSE",
-		{IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_128,
-			{.scattered = true, .bulk_alloc = true}}},
-	[IAVF_RX_SSE_FLEX_RXD] = {iavf_recv_pkts_vec_flex_rxd, "Vector Flex SSE",
-		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS, RTE_VECT_SIMD_128,
-			{.flex_desc = true, .bulk_alloc = true}}},
+	[IAVF_RX_SSE] = {
+		.pkt_burst = iavf_recv_pkts_vec,
+		.info = "Vector SSE",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true
+		}
+	},
+	[IAVF_RX_SSE_SCATTERED] = {
+		.pkt_burst = iavf_recv_scattered_pkts_vec,
+		.info = "Vector Scattered SSE",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
+	[IAVF_RX_SSE_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_pkts_vec_flex_rxd,
+		.info = "Vector Flex SSE",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true
+		}
+	},
 	[IAVF_RX_SSE_SCATTERED_FLEX_RXD] = {
-		iavf_recv_scattered_pkts_vec_flex_rxd, "Vector Scattered SSE Flex",
-		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
-			RTE_VECT_SIMD_128,
-			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
-	[IAVF_RX_AVX2] = {iavf_recv_pkts_vec_avx2, "Vector AVX2",
-		{IAVF_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256, {.bulk_alloc = true}}},
-	[IAVF_RX_AVX2_SCATTERED] = {iavf_recv_scattered_pkts_vec_avx2, "Vector Scattered AVX2",
-		{IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_256,
-			{.scattered = true, .bulk_alloc = true}}},
-	[IAVF_RX_AVX2_OFFLOAD] = {iavf_recv_pkts_vec_avx2_offload, "Vector AVX2 Offload",
-		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_256, {.bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_flex_rxd,
+		.info = "Vector Scattered SSE Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS |
+				       RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true
+		}
+	},
+	[IAVF_RX_AVX2] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx2,
+		.info = "Vector AVX2",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.bulk_alloc = true
+		}
+	},
+	[IAVF_RX_AVX2_SCATTERED] = {
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx2,
+		.info = "Vector Scattered AVX2",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
+	[IAVF_RX_AVX2_OFFLOAD] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx2_offload,
+		.info = "Vector AVX2 Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.bulk_alloc = true
+		}
+	},
 	[IAVF_RX_AVX2_SCATTERED_OFFLOAD] = {
-		iavf_recv_scattered_pkts_vec_avx2_offload, "Vector Scattered AVX2 offload",
-		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_256,
-			{.scattered = true, .bulk_alloc = true}}},
-	[IAVF_RX_AVX2_FLEX_RXD] = {iavf_recv_pkts_vec_avx2_flex_rxd, "Vector AVX2 Flex",
-		{IAVF_RX_VECTOR_FLEX_OFFLOADS, RTE_VECT_SIMD_256,
-			{.flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx2_offload,
+		.info = "Vector Scattered AVX2 Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
+	[IAVF_RX_AVX2_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx2_flex_rxd,
+		.info = "Vector AVX2 Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true
+		}
+	},
 	[IAVF_RX_AVX2_SCATTERED_FLEX_RXD] = {
-		iavf_recv_scattered_pkts_vec_avx2_flex_rxd, "Vector Scattered AVX2 Flex",
-		{IAVF_RX_VECTOR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_256,
-			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx2_flex_rxd,
+		.info = "Vector Scattered AVX2 Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true
+		}
+	},
 	[IAVF_RX_AVX2_FLEX_RXD_OFFLOAD] = {
-		iavf_recv_pkts_vec_avx2_flex_rxd_offload, "Vector AVX2 Flex Offload",
-			{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS, RTE_VECT_SIMD_256,
-				{.flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_pkts_vec_avx2_flex_rxd_offload,
+		.info = "Vector AVX2 Flex Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true
+		}
+	},
 	[IAVF_RX_AVX2_SCATTERED_FLEX_RXD_OFFLOAD] = {
-		iavf_recv_scattered_pkts_vec_avx2_flex_rxd_offload,
-		"Vector Scattered AVX2 Flex Offload",
-		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
-			RTE_VECT_SIMD_256,
-			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx2_flex_rxd_offload,
+		.info = "Vector Scattered AVX2 Flex Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS |
+				       RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true
+		}
+	},
 #ifdef CC_AVX512_SUPPORT
-	[IAVF_RX_AVX512] = {iavf_recv_pkts_vec_avx512, "Vector AVX512",
-		{IAVF_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
+	[IAVF_RX_AVX512] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx512,
+		.info = "Vector AVX512",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.bulk_alloc = true
+		}
+	},
 	[IAVF_RX_AVX512_SCATTERED] = {
-		iavf_recv_scattered_pkts_vec_avx512, "Vector Scattered AVX512",
-		{IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_512,
-			{.scattered = true, .bulk_alloc = true}}},
-	[IAVF_RX_AVX512_OFFLOAD] = {iavf_recv_pkts_vec_avx512_offload, "Vector AVX512 Offload",
-		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx512,
+		.info = "Vector Scattered AVX512",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
+	[IAVF_RX_AVX512_OFFLOAD] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx512_offload,
+		.info = "Vector AVX512 Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.bulk_alloc = true
+		}
+	},
 	[IAVF_RX_AVX512_SCATTERED_OFFLOAD] = {
-		iavf_recv_scattered_pkts_vec_avx512_offload, "Vector Scattered AVX512 offload",
-		{IAVF_RX_VECTOR_OFFLOAD_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_512,
-			{.scattered = true, .bulk_alloc = true}}},
-	[IAVF_RX_AVX512_FLEX_RXD] = {iavf_recv_pkts_vec_avx512_flex_rxd, "Vector AVX512 Flex",
-		{IAVF_RX_VECTOR_FLEX_OFFLOADS, RTE_VECT_SIMD_512,
-			{.flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx512_offload,
+		.info = "Vector Scattered AVX512 Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
+	[IAVF_RX_AVX512_FLEX_RXD] = {
+		.pkt_burst = iavf_recv_pkts_vec_avx512_flex_rxd,
+		.info = "Vector AVX512 Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true
+		}
+	},
 	[IAVF_RX_AVX512_SCATTERED_FLEX_RXD] = {
-		iavf_recv_scattered_pkts_vec_avx512_flex_rxd, "Vector Scattered AVX512 Flex",
-		{IAVF_RX_VECTOR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER, RTE_VECT_SIMD_512,
-			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx512_flex_rxd,
+		.info = "Vector Scattered AVX512 Flex",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true
+		}
+	},
 	[IAVF_RX_AVX512_FLEX_RXD_OFFLOAD] = {
-		iavf_recv_pkts_vec_avx512_flex_rxd_offload, "Vector AVX512 Flex Offload",
-		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS, RTE_VECT_SIMD_512,
-			{.flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_pkts_vec_avx512_flex_rxd_offload,
+		.info = "Vector AVX512 Flex Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true
+		}
+	},
 	[IAVF_RX_AVX512_SCATTERED_FLEX_RXD_OFFLOAD] = {
-		iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload,
-		"Vector Scattered AVX512 Flex offload",
-		{IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS | RTE_ETH_RX_OFFLOAD_SCATTER,
-			RTE_VECT_SIMD_512,
-			{.scattered = true, .flex_desc = true, .bulk_alloc = true}}},
+		.pkt_burst = iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload,
+		.info = "Vector Scattered AVX512 Flex Offload",
+		.features = {
+			.rx_offloads = IAVF_RX_VECTOR_OFFLOAD_FLEX_OFFLOADS |
+				       RTE_ETH_RX_OFFLOAD_SCATTER,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.flex_desc = true,
+			.extra.bulk_alloc = true
+		}
+	},
 #endif
 #elif defined RTE_ARCH_ARM
-	[IAVF_RX_SSE] = {iavf_recv_pkts_vec, "Vector Neon",
-		{IAVF_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
+	[IAVF_RX_SSE] = {
+		.pkt_burst = iavf_recv_pkts_vec,
+		.info = "Vector Neon",
+		.features = {
+			.rx_offloads = IAVF_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true
+		}
+	},
 #endif
 };
 
diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h
index 44be29caf6..5c9339b99f 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.h
+++ b/drivers/net/intel/iavf/iavf_rxtx.h
@@ -55,7 +55,6 @@
 		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |	\
 		RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 
-#define IAVF_RX_NO_OFFLOADS 0
 /* basic scalar path */
 #define IAVF_RX_SCALAR_OFFLOADS (			\
 		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |		\
-- 
2.34.1



* [PATCH v2 6/7] net/i40e: reformat the Rx path infos array
  2025-10-15 10:07 ` [PATCH v2 0/7] net/intel: fixes and improvements to rx path selection Ciara Loftus
                     ` (4 preceding siblings ...)
  2025-10-15 10:07   ` [PATCH v2 5/7] net/iavf: reformat the Rx path infos array Ciara Loftus
@ 2025-10-15 10:07   ` Ciara Loftus
  2025-10-15 10:07   ` [PATCH v2 7/7] net/ice: " Ciara Loftus
  6 siblings, 0 replies; 22+ messages in thread
From: Ciara Loftus @ 2025-10-15 10:07 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus

In order to improve readability, reformat the rx path infos array.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
v2:
* Newline for closing braces.
* Removed assignment of RTE_VECT_SIMD_DISABLED to simd_width, since the
selection logic works when this is set to zero for the scalar path.
---
 drivers/net/intel/i40e/i40e_rxtx.c | 149 +++++++++++++++++++++++------
 1 file changed, 118 insertions(+), 31 deletions(-)

diff --git a/drivers/net/intel/i40e/i40e_rxtx.c b/drivers/net/intel/i40e/i40e_rxtx.c
index 2bd0955225..255414dd03 100644
--- a/drivers/net/intel/i40e/i40e_rxtx.c
+++ b/drivers/net/intel/i40e/i40e_rxtx.c
@@ -3290,42 +3290,129 @@ i40e_recycle_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 }
 
 static const struct ci_rx_path_info i40e_rx_path_infos[] = {
-	[I40E_RX_DEFAULT] = { i40e_recv_pkts, "Scalar",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED}},
-	[I40E_RX_SCATTERED] = { i40e_recv_scattered_pkts, "Scalar Scattered",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.scattered = true}}},
-	[I40E_RX_BULK_ALLOC] = { i40e_recv_pkts_bulk_alloc, "Scalar Bulk Alloc",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.bulk_alloc = true}}},
+	[I40E_RX_DEFAULT] = {
+		.pkt_burst = i40e_recv_pkts,
+		.info = "Scalar",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS
+		}
+	},
+	[I40E_RX_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts,
+		.info = "Scalar Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.extra.scattered = true
+		}
+	},
+	[I40E_RX_BULK_ALLOC] = {
+		.pkt_burst = i40e_recv_pkts_bulk_alloc,
+		.info = "Scalar Bulk Alloc",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.extra.bulk_alloc = true
+		}
+	},
 #ifdef RTE_ARCH_X86
-	[I40E_RX_SSE] = { i40e_recv_pkts_vec, "Vector SSE",
-		{I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-	[I40E_RX_SSE_SCATTERED] = { i40e_recv_scattered_pkts_vec, "Vector SSE Scattered",
-		{I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_128,
-			{.scattered = true, .bulk_alloc = true}}},
-	[I40E_RX_AVX2] = { i40e_recv_pkts_vec_avx2, "Vector AVX2",
-		{I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256, {.bulk_alloc = true}}},
-	[I40E_RX_AVX2_SCATTERED] = { i40e_recv_scattered_pkts_vec_avx2, "Vector AVX2 Scattered",
-		{I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256,
-			{.scattered = true, .bulk_alloc = true}}},
+	[I40E_RX_SSE] = {
+		.pkt_burst = i40e_recv_pkts_vec,
+		.info = "Vector SSE",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true
+		}
+	},
+	[I40E_RX_SSE_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts_vec,
+		.info = "Vector SSE Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
+	[I40E_RX_AVX2] = {
+		.pkt_burst = i40e_recv_pkts_vec_avx2,
+		.info = "Vector AVX2",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.bulk_alloc = true
+		}
+	},
+	[I40E_RX_AVX2_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts_vec_avx2,
+		.info = "Vector AVX2 Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
 #ifdef CC_AVX512_SUPPORT
-	[I40E_RX_AVX512] = { i40e_recv_pkts_vec_avx512, "Vector AVX512",
-		{I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
-	[I40E_RX_AVX512_SCATTERED] = { i40e_recv_scattered_pkts_vec_avx512,
-		"Vector AVX512 Scattered", {I40E_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512,
-			{.scattered = true, .bulk_alloc = true}}},
+	[I40E_RX_AVX512] = {
+		.pkt_burst = i40e_recv_pkts_vec_avx512,
+		.info = "Vector AVX512",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.bulk_alloc = true
+		}
+	},
+	[I40E_RX_AVX512_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts_vec_avx512,
+		.info = "Vector AVX512 Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
 #endif
 #elif defined(RTE_ARCH_ARM64)
-	[I40E_RX_NEON] = { i40e_recv_pkts_vec, "Vector Neon",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-	[I40E_RX_NEON_SCATTERED] = { i40e_recv_scattered_pkts_vec, "Vector Neon Scattered",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_128,
-			{.scattered = true, .bulk_alloc = true}}},
+	[I40E_RX_NEON] = {
+		.pkt_burst = i40e_recv_pkts_vec,
+		.info = "Vector Neon",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true
+		}
+	},
+	[I40E_RX_NEON_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts_vec,
+		.info = "Vector Neon Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
 #elif defined(RTE_ARCH_PPC_64)
-	[I40E_RX_ALTIVEC] = { i40e_recv_pkts_vec, "Vector AltiVec",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-	[I40E_RX_ALTIVEC_SCATTERED] = { i40e_recv_scattered_pkts_vec, "Vector AltiVec Scattered",
-		{I40E_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_128,
-			{.scattered = true, .bulk_alloc = true}}},
+	[I40E_RX_ALTIVEC] = {
+		.pkt_burst = i40e_recv_pkts_vec,
+		.info = "Vector AltiVec",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true
+		}
+	},
+	[I40E_RX_ALTIVEC_SCATTERED] = {
+		.pkt_burst = i40e_recv_scattered_pkts_vec,
+		.info = "Vector AltiVec Scattered",
+		.features = {
+			.rx_offloads = I40E_RX_SCALAR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
 #endif
 };
 
-- 
2.34.1



* [PATCH v2 7/7] net/ice: reformat the Rx path infos array
  2025-10-15 10:07 ` [PATCH v2 0/7] net/intel: fixes and improvements to rx path selection Ciara Loftus
                     ` (5 preceding siblings ...)
  2025-10-15 10:07   ` [PATCH v2 6/7] net/i40e: " Ciara Loftus
@ 2025-10-15 10:07   ` Ciara Loftus
  6 siblings, 0 replies; 22+ messages in thread
From: Ciara Loftus @ 2025-10-15 10:07 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus

In order to improve readability, reformat the rx path infos array.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
v2:
* Newline for closing braces.
* Removed assignment of RTE_VECT_SIMD_DISABLED to simd_width, since the
selection logic works when this is set to zero for the scalar path.
* Removed unused define ICE_RX_NO_OFFLOADS
---
 drivers/net/intel/ice/ice_rxtx.c | 147 ++++++++++++++++++++++++-------
 drivers/net/intel/ice/ice_rxtx.h |   1 -
 2 files changed, 116 insertions(+), 32 deletions(-)

diff --git a/drivers/net/intel/ice/ice_rxtx.c b/drivers/net/intel/ice/ice_rxtx.c
index 411b353417..2c87e56da4 100644
--- a/drivers/net/intel/ice/ice_rxtx.c
+++ b/drivers/net/intel/ice/ice_rxtx.c
@@ -3667,41 +3667,126 @@ ice_xmit_pkts_simple(void *tx_queue,
 }
 
 static const struct ci_rx_path_info ice_rx_path_infos[] = {
-	[ICE_RX_DEFAULT] = {ice_recv_pkts, "Scalar",
-		{ICE_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED}},
-	[ICE_RX_SCATTERED] = {ice_recv_scattered_pkts, "Scalar Scattered",
-		{ICE_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.scattered = true}}},
-	[ICE_RX_BULK_ALLOC] = {ice_recv_pkts_bulk_alloc, "Scalar Bulk Alloc",
-		{ICE_RX_SCALAR_OFFLOADS, RTE_VECT_SIMD_DISABLED, {.bulk_alloc = true}}},
+	[ICE_RX_DEFAULT] = {
+		.pkt_burst = ice_recv_pkts,
+		.info = "Scalar",
+		.features = {
+			.rx_offloads = ICE_RX_SCALAR_OFFLOADS
+		}
+	},
+	[ICE_RX_SCATTERED] = {
+		.pkt_burst = ice_recv_scattered_pkts,
+		.info = "Scalar Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_SCALAR_OFFLOADS,
+			.extra.scattered = true
+		}
+	},
+	[ICE_RX_BULK_ALLOC] = {
+		.pkt_burst = ice_recv_pkts_bulk_alloc,
+		.info = "Scalar Bulk Alloc",
+		.features = {
+			.rx_offloads = ICE_RX_SCALAR_OFFLOADS,
+			.extra.bulk_alloc = true
+		}
+	},
 #ifdef RTE_ARCH_X86
-	[ICE_RX_SSE] = {ice_recv_pkts_vec, "Vector SSE",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_128, {.bulk_alloc = true}}},
-	[ICE_RX_SSE_SCATTERED] = {ice_recv_scattered_pkts_vec, "Vector SSE Scattered",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_128,
-			{.scattered = true, .bulk_alloc = true}}},
-	[ICE_RX_AVX2] = {ice_recv_pkts_vec_avx2, "Vector AVX2",
-		{ICE_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256, {.bulk_alloc = true}}},
-	[ICE_RX_AVX2_SCATTERED] = {ice_recv_scattered_pkts_vec_avx2, "Vector AVX2 Scattered",
-		{ICE_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_256,
-			{.scattered = true, .bulk_alloc = true}}},
-	[ICE_RX_AVX2_OFFLOAD] = {ice_recv_pkts_vec_avx2_offload, "Offload Vector AVX2",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_256, {.bulk_alloc = true}}},
+	[ICE_RX_SSE] = {
+		.pkt_burst = ice_recv_pkts_vec,
+		.info = "Vector SSE",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.bulk_alloc = true
+		}
+	},
+	[ICE_RX_SSE_SCATTERED] = {
+		.pkt_burst = ice_recv_scattered_pkts_vec,
+		.info = "Vector SSE Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_128,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
+	[ICE_RX_AVX2] = {
+		.pkt_burst = ice_recv_pkts_vec_avx2,
+		.info = "Vector AVX2",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.bulk_alloc = true
+		}
+	},
+	[ICE_RX_AVX2_SCATTERED] = {
+		.pkt_burst = ice_recv_scattered_pkts_vec_avx2,
+		.info = "Vector AVX2 Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
+	[ICE_RX_AVX2_OFFLOAD] = {
+		.pkt_burst = ice_recv_pkts_vec_avx2_offload,
+		.info = "Offload Vector AVX2",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.bulk_alloc = true
+		}
+	},
 	[ICE_RX_AVX2_SCATTERED_OFFLOAD] = {
-		ice_recv_scattered_pkts_vec_avx2_offload, "Offload Vector AVX2 Scattered",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_256,
-			{.scattered = true, .bulk_alloc = true}}},
+		.pkt_burst = ice_recv_scattered_pkts_vec_avx2_offload,
+		.info = "Offload Vector AVX2 Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_256,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
 #ifdef CC_AVX512_SUPPORT
-	[ICE_RX_AVX512] = {ice_recv_pkts_vec_avx512, "Vector AVX512",
-		{ICE_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
-	[ICE_RX_AVX512_SCATTERED] = {ice_recv_scattered_pkts_vec_avx512, "Vector AVX512 Scattered",
-		{ICE_RX_VECTOR_OFFLOADS, RTE_VECT_SIMD_512,
-			{.scattered = true, .bulk_alloc = true}}},
-	[ICE_RX_AVX512_OFFLOAD] = {ice_recv_pkts_vec_avx512_offload, "Offload Vector AVX512",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_512, {.bulk_alloc = true}}},
+	[ICE_RX_AVX512] = {
+		.pkt_burst = ice_recv_pkts_vec_avx512,
+		.info = "Vector AVX512",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.bulk_alloc = true
+		}
+	},
+	[ICE_RX_AVX512_SCATTERED] = {
+		.pkt_burst = ice_recv_scattered_pkts_vec_avx512,
+		.info = "Vector AVX512 Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
+	[ICE_RX_AVX512_OFFLOAD] = {
+		.pkt_burst = ice_recv_pkts_vec_avx512_offload,
+		.info = "Offload Vector AVX512",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.bulk_alloc = true
+		}
+	},
 	[ICE_RX_AVX512_SCATTERED_OFFLOAD] = {
-		ice_recv_scattered_pkts_vec_avx512_offload, "Offload Vector AVX512 Scattered",
-		{ICE_RX_VECTOR_OFFLOAD_OFFLOADS, RTE_VECT_SIMD_512,
-			{.scattered = true, .bulk_alloc = true}}},
+		.pkt_burst = ice_recv_scattered_pkts_vec_avx512_offload,
+		.info = "Offload Vector AVX512 Scattered",
+		.features = {
+			.rx_offloads = ICE_RX_VECTOR_OFFLOAD_OFFLOADS,
+			.simd_width = RTE_VECT_SIMD_512,
+			.extra.scattered = true,
+			.extra.bulk_alloc = true
+		}
+	},
 #endif
 #endif
 };
diff --git a/drivers/net/intel/ice/ice_rxtx.h b/drivers/net/intel/ice/ice_rxtx.h
index 6dac592eb4..141a62a7da 100644
--- a/drivers/net/intel/ice/ice_rxtx.h
+++ b/drivers/net/intel/ice/ice_rxtx.h
@@ -80,7 +80,6 @@
 #define ICE_TX_OFFLOAD_NOTSUP_MASK \
 		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ ICE_TX_OFFLOAD_MASK)
 
-#define ICE_RX_NO_OFFLOADS 0
 /* basic scalar path */
 #define ICE_RX_SCALAR_OFFLOADS (				\
 			RTE_ETH_RX_OFFLOAD_VLAN_STRIP |		\
-- 
2.34.1



end of thread, other threads:[~2025-10-15 10:08 UTC | newest]

Thread overview: 22+ messages
2025-10-14  8:45 [PATCH 0/6] net/intel: fixes and improvements to rx path selection Ciara Loftus
2025-10-14  8:45 ` [PATCH 1/6] net/intel: fix Rx vector capability detection Ciara Loftus
2025-10-14 14:16   ` Bruce Richardson
2025-10-14 14:59     ` Loftus, Ciara
2025-10-14  8:45 ` [PATCH 2/6] net/iavf: fix Rx paths feature definitions Ciara Loftus
2025-10-14 14:26   ` Bruce Richardson
2025-10-14  8:45 ` [PATCH 3/6] net/iavf: fix Rx path selection for scalar flex bulk alloc Ciara Loftus
2025-10-14 14:33   ` Bruce Richardson
2025-10-14  8:45 ` [PATCH 4/6] net/iavf: reformat the Rx path infos array Ciara Loftus
2025-10-14 14:38   ` Bruce Richardson
2025-10-14  8:45 ` [PATCH 5/6] net/i40e: " Ciara Loftus
2025-10-14 14:38   ` Bruce Richardson
2025-10-14  8:45 ` [PATCH 6/6] net/ice: " Ciara Loftus
2025-10-14 14:39   ` Bruce Richardson
2025-10-15 10:07 ` [PATCH v2 0/7] net/intel: fixes and improvements to rx path selection Ciara Loftus
2025-10-15 10:07   ` [PATCH v2 1/7] net/intel: fix Rx vector capability detection Ciara Loftus
2025-10-15 10:07   ` [PATCH v2 2/7] net/intel: remove redundant Rx offload check Ciara Loftus
2025-10-15 10:07   ` [PATCH v2 3/7] net/iavf: fix Rx paths feature definitions Ciara Loftus
2025-10-15 10:07   ` [PATCH v2 4/7] net/iavf: fix Rx path selection for scalar flex bulk alloc Ciara Loftus
2025-10-15 10:07   ` [PATCH v2 5/7] net/iavf: reformat the Rx path infos array Ciara Loftus
2025-10-15 10:07   ` [PATCH v2 6/7] net/i40e: " Ciara Loftus
2025-10-15 10:07   ` [PATCH v2 7/7] net/ice: " Ciara Loftus
