patches for DPDK stable branches
From: Kevin Traynor <ktraynor@redhat.com>
To: Bruce Richardson <bruce.richardson@intel.com>
Cc: Ciara Loftus <ciara.loftus@intel.com>, dpdk stable <stable@dpdk.org>
Subject: patch 'net/ice: fix path selection for QinQ Tx offload' has been queued to stable release 24.11.4
Date: Fri, 21 Nov 2025 11:20:54 +0000	[thread overview]
Message-ID: <20251121112128.485623-70-ktraynor@redhat.com> (raw)
In-Reply-To: <20251121112128.485623-1-ktraynor@redhat.com>

Hi,

FYI, your patch has been queued to stable release 24.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/26/25, so please
shout if you have any objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate whether any rebasing was
needed to apply it to the stable branch. If there were code changes for
rebasing (i.e. not only metadata diffs), please double-check that the rebase
was done correctly.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable/commit/3fdaa456b2b0018fcdb5d0596e3216bd5e920958

Thanks.

Kevin

---
From 3fdaa456b2b0018fcdb5d0596e3216bd5e920958 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Wed, 12 Nov 2025 11:57:26 +0000
Subject: [PATCH] net/ice: fix path selection for QinQ Tx offload

[ upstream commit 61ccab85e3972d6e3ee61b3e6a6a6872a33e5ac3 ]

The capabilities flags for the vector offload path include the QinQ
offload capability, but in fact the offload path lacks any ability to
create context descriptors. This means that it cannot insert the multiple
VLAN tags needed for QinQ support, so move the offload from the
VECTOR_OFFLOAD list to the NO_VECTOR list. Similarly, remove the check for
the QinQ mbuf flag on packets being transmitted, since requesting that
offload is invalid when the feature is not enabled.

Fixes: 808a17b3c1e6 ("net/ice: add Rx AVX512 offload path")

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/intel/ice/ice_rxtx_vec_common.h | 207 ++++++++++++++++++++
 1 file changed, 207 insertions(+)
 create mode 100644 drivers/net/intel/ice/ice_rxtx_vec_common.h

diff --git a/drivers/net/intel/ice/ice_rxtx_vec_common.h b/drivers/net/intel/ice/ice_rxtx_vec_common.h
new file mode 100644
index 0000000000..39581cb7ae
--- /dev/null
+++ b/drivers/net/intel/ice/ice_rxtx_vec_common.h
@@ -0,0 +1,207 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _ICE_RXTX_VEC_COMMON_H_
+#define _ICE_RXTX_VEC_COMMON_H_
+
+#include "../common/rx.h"
+#include "ice_rxtx.h"
+
+static inline int
+ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
+{
+	return (txq->ice_tx_ring[idx].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) ==
+				rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+}
+
+static inline void
+_ice_rx_queue_release_mbufs_vec(struct ci_rx_queue *rxq)
+{
+	const unsigned int mask = rxq->nb_rx_desc - 1;
+	unsigned int i;
+
+	if (unlikely(!rxq->sw_ring)) {
+		PMD_DRV_LOG(DEBUG, "sw_ring is NULL");
+		return;
+	}
+
+	if (rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i].mbuf)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+		}
+	} else {
+		for (i = rxq->rx_tail;
+		     i != rxq->rxrearm_start;
+		     i = (i + 1) & mask) {
+			if (rxq->sw_ring[i].mbuf)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+#define ICE_TX_NO_VECTOR_FLAGS (			\
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		\
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |	\
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |    \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |    \
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |    \
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |    \
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |	\
+		RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP)
+
+#define ICE_TX_VECTOR_OFFLOAD (				\
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
+
+#define ICE_VECTOR_PATH		0
+#define ICE_VECTOR_OFFLOAD_PATH	1
+
+static inline int
+ice_rx_vec_queue_default(struct ci_rx_queue *rxq)
+{
+	if (!rxq)
+		return -1;
+
+	if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh))
+		return -1;
+
+	if (rxq->proto_xtr != PROTO_XTR_NONE)
+		return -1;
+
+	return 0;
+}
+
+static inline int
+ice_tx_vec_queue_default(struct ci_tx_queue *txq)
+{
+	if (!txq)
+		return -1;
+
+	if (txq->tx_rs_thresh < ICE_VPMD_TX_BURST ||
+	    txq->tx_rs_thresh > ICE_TX_MAX_FREE_BUF_SZ)
+		return -1;
+
+	if (txq->offloads & ICE_TX_NO_VECTOR_FLAGS)
+		return -1;
+
+	if (txq->offloads & ICE_TX_VECTOR_OFFLOAD)
+		return ICE_VECTOR_OFFLOAD_PATH;
+
+	return ICE_VECTOR_PATH;
+}
+
+static inline int
+ice_rx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	int i;
+	struct ci_rx_queue *rxq;
+	int ret = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		ret = (ice_rx_vec_queue_default(rxq));
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
+
+static inline int
+ice_tx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	int i;
+	struct ci_tx_queue *txq;
+	int ret = 0;
+	int result = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		ret = ice_tx_vec_queue_default(txq);
+		if (ret < 0)
+			return -1;
+		if (ret == ICE_VECTOR_OFFLOAD_PATH)
+			result = ret;
+	}
+
+	return result;
+}
+
+static inline void
+ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
+		       uint64_t *txd_hi)
+{
+	uint64_t ol_flags = tx_pkt->ol_flags;
+	uint32_t td_cmd = 0;
+	uint32_t td_offset = 0;
+
+	/* Tx Checksum Offload */
+	/* SET MACLEN */
+	td_offset |= (tx_pkt->l2_len >> 1) <<
+		ICE_TX_DESC_LEN_MACLEN_S;
+
+	/* Enable L3 checksum offload */
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
+		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		td_offset |= (tx_pkt->l3_len >> 2) <<
+			ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
+		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
+		td_offset |= (tx_pkt->l3_len >> 2) <<
+			ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
+		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
+		td_offset |= (tx_pkt->l3_len >> 2) <<
+			ICE_TX_DESC_LEN_IPLEN_S;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
+		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
+			ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
+		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
+		td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
+			ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case RTE_MBUF_F_TX_UDP_CKSUM:
+		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
+		td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
+			ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	default:
+		break;
+	}
+
+	*txd_hi |= ((uint64_t)td_offset) << ICE_TXD_QW1_OFFSET_S;
+
+	/* Tx VLAN insertion Offload */
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
+		td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
+		*txd_hi |= ((uint64_t)tx_pkt->vlan_tci <<
+				ICE_TXD_QW1_L2TAG1_S);
+	}
+
+	*txd_hi |= ((uint64_t)td_cmd) << ICE_TXD_QW1_CMD_S;
+}
+#endif
-- 
2.51.0

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2025-11-21 11:05:11.759890234 +0000
+++ 0070-net-ice-fix-path-selection-for-QinQ-Tx-offload.patch	2025-11-21 11:05:09.546978177 +0000
@@ -1 +1 @@
-From 61ccab85e3972d6e3ee61b3e6a6a6872a33e5ac3 Mon Sep 17 00:00:00 2001
+From 3fdaa456b2b0018fcdb5d0596e3216bd5e920958 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 61ccab85e3972d6e3ee61b3e6a6a6872a33e5ac3 ]
+
@@ -15 +16,0 @@
-Cc: stable@dpdk.org
@@ -20,2 +21,3 @@
- drivers/net/intel/ice/ice_rxtx_vec_common.h | 6 +++---
- 1 file changed, 3 insertions(+), 3 deletions(-)
+ drivers/net/intel/ice/ice_rxtx_vec_common.h | 207 ++++++++++++++++++++
+ 1 file changed, 207 insertions(+)
+ create mode 100644 drivers/net/intel/ice/ice_rxtx_vec_common.h
@@ -24,2 +26,3 @@
-index a24694c0b1..39581cb7ae 100644
---- a/drivers/net/intel/ice/ice_rxtx_vec_common.h
+new file mode 100644
+index 0000000000..39581cb7ae
+--- /dev/null
@@ -27,3 +30,56 @@
-@@ -54,4 +54,5 @@ _ice_rx_queue_release_mbufs_vec(struct ci_rx_queue *rxq)
- #define ICE_TX_NO_VECTOR_FLAGS (			\
- 		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		\
+@@ -0,0 +1,207 @@
++/* SPDX-License-Identifier: BSD-3-Clause
++ * Copyright(c) 2019 Intel Corporation
++ */
++
++#ifndef _ICE_RXTX_VEC_COMMON_H_
++#define _ICE_RXTX_VEC_COMMON_H_
++
++#include "../common/rx.h"
++#include "ice_rxtx.h"
++
++static inline int
++ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
++{
++	return (txq->ice_tx_ring[idx].cmd_type_offset_bsz &
++			rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) ==
++				rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
++}
++
++static inline void
++_ice_rx_queue_release_mbufs_vec(struct ci_rx_queue *rxq)
++{
++	const unsigned int mask = rxq->nb_rx_desc - 1;
++	unsigned int i;
++
++	if (unlikely(!rxq->sw_ring)) {
++		PMD_DRV_LOG(DEBUG, "sw_ring is NULL");
++		return;
++	}
++
++	if (rxq->rxrearm_nb >= rxq->nb_rx_desc)
++		return;
++
++	/* free all mbufs that are valid in the ring */
++	if (rxq->rxrearm_nb == 0) {
++		for (i = 0; i < rxq->nb_rx_desc; i++) {
++			if (rxq->sw_ring[i].mbuf)
++				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
++		}
++	} else {
++		for (i = rxq->rx_tail;
++		     i != rxq->rxrearm_start;
++		     i = (i + 1) & mask) {
++			if (rxq->sw_ring[i].mbuf)
++				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
++		}
++	}
++
++	rxq->rxrearm_nb = rxq->nb_rx_desc;
++
++	/* set all entries to NULL */
++	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
++}
++
++#define ICE_TX_NO_VECTOR_FLAGS (			\
++		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		\
@@ -31,13 +87,141 @@
- 		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
- 		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
-@@ -65,5 +66,4 @@ _ice_rx_queue_release_mbufs_vec(struct ci_rx_queue *rxq)
- #define ICE_TX_VECTOR_OFFLOAD (				\
- 		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		\
--		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		\
- 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		\
- 		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		\
-@@ -196,6 +196,6 @@ ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
- 	*txd_hi |= ((uint64_t)td_offset) << ICE_TXD_QW1_OFFSET_S;
- 
--	/* Tx VLAN/QINQ insertion Offload */
--	if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
++		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
++		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
++		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |    \
++		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |    \
++		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |    \
++		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |    \
++		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |	\
++		RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP)
++
++#define ICE_TX_VECTOR_OFFLOAD (				\
++		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		\
++		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		\
++		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		\
++		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		\
++		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
++
++#define ICE_VECTOR_PATH		0
++#define ICE_VECTOR_OFFLOAD_PATH	1
++
++static inline int
++ice_rx_vec_queue_default(struct ci_rx_queue *rxq)
++{
++	if (!rxq)
++		return -1;
++
++	if (!ci_rxq_vec_capable(rxq->nb_rx_desc, rxq->rx_free_thresh))
++		return -1;
++
++	if (rxq->proto_xtr != PROTO_XTR_NONE)
++		return -1;
++
++	return 0;
++}
++
++static inline int
++ice_tx_vec_queue_default(struct ci_tx_queue *txq)
++{
++	if (!txq)
++		return -1;
++
++	if (txq->tx_rs_thresh < ICE_VPMD_TX_BURST ||
++	    txq->tx_rs_thresh > ICE_TX_MAX_FREE_BUF_SZ)
++		return -1;
++
++	if (txq->offloads & ICE_TX_NO_VECTOR_FLAGS)
++		return -1;
++
++	if (txq->offloads & ICE_TX_VECTOR_OFFLOAD)
++		return ICE_VECTOR_OFFLOAD_PATH;
++
++	return ICE_VECTOR_PATH;
++}
++
++static inline int
++ice_rx_vec_dev_check_default(struct rte_eth_dev *dev)
++{
++	int i;
++	struct ci_rx_queue *rxq;
++	int ret = 0;
++
++	for (i = 0; i < dev->data->nb_rx_queues; i++) {
++		rxq = dev->data->rx_queues[i];
++		ret = (ice_rx_vec_queue_default(rxq));
++		if (ret < 0)
++			break;
++	}
++
++	return ret;
++}
++
++static inline int
++ice_tx_vec_dev_check_default(struct rte_eth_dev *dev)
++{
++	int i;
++	struct ci_tx_queue *txq;
++	int ret = 0;
++	int result = 0;
++
++	for (i = 0; i < dev->data->nb_tx_queues; i++) {
++		txq = dev->data->tx_queues[i];
++		ret = ice_tx_vec_queue_default(txq);
++		if (ret < 0)
++			return -1;
++		if (ret == ICE_VECTOR_OFFLOAD_PATH)
++			result = ret;
++	}
++
++	return result;
++}
++
++static inline void
++ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
++		       uint64_t *txd_hi)
++{
++	uint64_t ol_flags = tx_pkt->ol_flags;
++	uint32_t td_cmd = 0;
++	uint32_t td_offset = 0;
++
++	/* Tx Checksum Offload */
++	/* SET MACLEN */
++	td_offset |= (tx_pkt->l2_len >> 1) <<
++		ICE_TX_DESC_LEN_MACLEN_S;
++
++	/* Enable L3 checksum offload */
++	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
++		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
++		td_offset |= (tx_pkt->l3_len >> 2) <<
++			ICE_TX_DESC_LEN_IPLEN_S;
++	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
++		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
++		td_offset |= (tx_pkt->l3_len >> 2) <<
++			ICE_TX_DESC_LEN_IPLEN_S;
++	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
++		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
++		td_offset |= (tx_pkt->l3_len >> 2) <<
++			ICE_TX_DESC_LEN_IPLEN_S;
++	}
++
++	/* Enable L4 checksum offloads */
++	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
++	case RTE_MBUF_F_TX_TCP_CKSUM:
++		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
++		td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
++			ICE_TX_DESC_LEN_L4_LEN_S;
++		break;
++	case RTE_MBUF_F_TX_SCTP_CKSUM:
++		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
++		td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
++			ICE_TX_DESC_LEN_L4_LEN_S;
++		break;
++	case RTE_MBUF_F_TX_UDP_CKSUM:
++		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
++		td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
++			ICE_TX_DESC_LEN_L4_LEN_S;
++		break;
++	default:
++		break;
++	}
++
++	*txd_hi |= ((uint64_t)td_offset) << ICE_TXD_QW1_OFFSET_S;
++
@@ -46,2 +230,8 @@
- 		td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
- 		*txd_hi |= ((uint64_t)tx_pkt->vlan_tci <<
++		td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
++		*txd_hi |= ((uint64_t)tx_pkt->vlan_tci <<
++				ICE_TXD_QW1_L2TAG1_S);
++	}
++
++	*txd_hi |= ((uint64_t)td_cmd) << ICE_TXD_QW1_CMD_S;
++}
++#endif


