patches for DPDK stable branches
From: Xueming Li <xuemingl@nvidia.com>
To: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Cc: Xueming Li <xuemingl@nvidia.com>, dpdk stable <stable@dpdk.org>
Subject: patch 'net/mlx5: fix out-of-order completions in ordinary Rx burst' has been queued to stable release 23.11.5
Date: Wed, 30 Jul 2025 22:56:31 +0800	[thread overview]
Message-ID: <20250730145633.245984-23-xuemingl@nvidia.com> (raw)
In-Reply-To: <20250730145633.245984-1-xuemingl@nvidia.com>

Hi,

FYI, your patch has been queued to stable release 23.11.5

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 08/10/25. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e., not only metadata diffs), please double-check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging

This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c7c7562ab52984a394e9ef7ce524f1aa8e1db2d9

Thanks.

Xueming Li <xuemingl@nvidia.com>

---
From c7c7562ab52984a394e9ef7ce524f1aa8e1db2d9 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Tue, 8 Jul 2025 13:46:41 +0300
Subject: [PATCH] net/mlx5: fix out-of-order completions in ordinary Rx burst
Cc: Xueming Li <xuemingl@nvidia.com>

[ upstream commit 5f9223611f3570c974b9c8e6c0b62db605fb3076 ]

The existing Rx burst routines assume that completions arrive in
the CQ in order and therefore address the WQEs in the receive
queue in order. That is not true for shared RQs: CQEs can arrive
out of order, and to address the appropriate WQE we must fetch
its index from the CQE wqe_counter field.

Also, the RQ CI can be advanced if and only if all the WQEs in
the covered range have been handled. This requires a sliding
window to track handled WQEs. The out-of-order window size is
supported up to the full queue size.
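
For illustration, here is a minimal standalone sketch of the
sliding-window bookkeeping, assuming a 32-bit bitmap per window slot.
The names and sizes below are illustrative only, not the driver's
actual mlx5_rq_win_* helpers:

	#include <stdint.h>
	#include <stdio.h>

	#define WIN_BITS  32u /* completions tracked per window slot */
	#define WIN_SLOTS  4u /* number of slots, power of two */

	static uint32_t win[WIN_SLOTS]; /* one bit per outstanding WQE */
	static uint32_t win_idx;        /* slot holding the oldest WQE */
	static uint32_t rq_ci;          /* consumer index exposed to HW */

	/* Mark WQE (rq_ci + delta) as completed and return how far
	 * rq_ci may advance, i.e. the fully completed prefix length. */
	static uint32_t win_advance(uint32_t delta)
	{
		uint32_t idx = (win_idx + delta / WIN_BITS) % WIN_SLOTS;
		uint32_t adv = 0;

		win[idx] |= 1u << (delta % WIN_BITS);
		/* Only a completely full head slot lets rq_ci move. */
		while (win[win_idx] == UINT32_MAX) {
			win[win_idx] = 0;
			win_idx = (win_idx + 1) % WIN_SLOTS;
			adv += WIN_BITS;
		}
		return adv;
	}

	int main(void)
	{
		uint32_t d;

		/* WQEs 1..31 complete before WQE 0: rq_ci cannot move. */
		for (d = 1; d < 32; d++)
			rq_ci += win_advance(d - rq_ci);
		printf("rq_ci=%u\n", rq_ci); /* prints 0 */
		/* The missing completion arrives and the gap closes. */
		rq_ci += win_advance(0);
		printf("rq_ci=%u\n", rq_ci); /* prints 32 */
		return 0;
	}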

Fixes: 09c2555303be ("net/mlx5: support shared Rx queue")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_verbs.c |   8 +-
 drivers/net/mlx5/mlx5_devx.c        |   7 +-
 drivers/net/mlx5/mlx5_ethdev.c      |   8 +-
 drivers/net/mlx5/mlx5_rx.c          | 284 +++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_rx.h          |  28 ++-
 drivers/net/mlx5/mlx5_rxq.c         |  11 +-
 6 files changed, 334 insertions(+), 12 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index b54f3ccd9a..efe8aa12fb 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -397,7 +397,13 @@ mlx5_rxq_ibv_obj_new(struct mlx5_rxq_priv *rxq)
 	rxq_data->wqes = rwq.buf;
 	rxq_data->rq_db = rwq.dbrec;
 	rxq_data->cq_arm_sn = 0;
-	mlx5_rxq_initialize(rxq_data);
+	ret = mlx5_rxq_initialize(rxq_data);
+	if (ret) {
+		DRV_LOG(ERR, "Port %u Rx queue %u RQ initialization failure.",
+			priv->dev_data->port_id, rxq->idx);
+		rte_errno = ENOMEM;
+		goto error;
+	}
 	rxq_data->cq_ci = 0;
 	priv->dev_data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
 	rxq_ctrl->wqn = ((struct ibv_wq *)(tmpl->wq))->wq_num;
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 47e86197b8..be9dbf0467 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -617,7 +617,12 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 				(uint32_t *)(uintptr_t)tmpl->devx_rmp.wq.db_rec;
 	}
 	if (!rxq_ctrl->started) {
-		mlx5_rxq_initialize(rxq_data);
+		if (mlx5_rxq_initialize(rxq_data)) {
+			DRV_LOG(ERR, "Port %u Rx queue %u RQ initialization failure.",
+			priv->dev_data->port_id, rxq->idx);
+			rte_errno = ENOMEM;
+			goto error;
+		}
 		rxq_ctrl->wqn = rxq->devx_rq.rq->id;
 	}
 	priv->dev_data->rx_queue_state[rxq->idx] = RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 3762797fe2..dbfd46ce1c 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -641,6 +641,7 @@ mlx5_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 	};
 
 	if (dev->rx_pkt_burst == mlx5_rx_burst ||
+	    dev->rx_pkt_burst == mlx5_rx_burst_out_of_order ||
 	    dev->rx_pkt_burst == mlx5_rx_burst_mprq ||
 	    dev->rx_pkt_burst == mlx5_rx_burst_vec ||
 	    dev->rx_pkt_burst == mlx5_rx_burst_mprq_vec)
@@ -709,7 +710,12 @@ mlx5_select_rx_function(struct rte_eth_dev *dev)
 	eth_rx_burst_t rx_pkt_burst = mlx5_rx_burst;
 
 	MLX5_ASSERT(dev != NULL);
-	if (mlx5_check_vec_rx_support(dev) > 0) {
+	if (mlx5_shared_rq_enabled(dev)) {
+		rx_pkt_burst = mlx5_rx_burst_out_of_order;
+		DRV_LOG(DEBUG, "port %u forced to use SPRQ"
+			" Rx function with Out-of-Order completions",
+			dev->data->port_id);
+	} else if (mlx5_check_vec_rx_support(dev) > 0) {
 		if (mlx5_mprq_enabled(dev)) {
 			rx_pkt_burst = mlx5_rx_burst_mprq_vec;
 			DRV_LOG(DEBUG, "port %u selected vectorized"
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 86a7e090a1..73d9f23a65 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -41,7 +41,7 @@ static __rte_always_inline int
 mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 		 uint16_t cqe_n, uint16_t cqe_mask,
 		 volatile struct mlx5_mini_cqe8 **mcqe,
-		 uint16_t *skip_cnt, bool mprq);
+		 uint16_t *skip_cnt, bool mprq, uint32_t *widx);
 
 static __rte_always_inline uint32_t
 rxq_cq_to_ol_flags(volatile struct mlx5_cqe *cqe);
@@ -220,6 +220,8 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 	}
 	if (pkt_burst == mlx5_rx_burst) {
 		snprintf(mode->info, sizeof(mode->info), "%s", "Scalar");
+	} else if (pkt_burst == mlx5_rx_burst_out_of_order) {
+		snprintf(mode->info, sizeof(mode->info), "%s", "Scalar Out-of-Order");
 	} else if (pkt_burst == mlx5_rx_burst_mprq) {
 		snprintf(mode->info, sizeof(mode->info), "%s", "Multi-Packet RQ");
 	} else if (pkt_burst == mlx5_rx_burst_vec) {
@@ -358,13 +360,84 @@ rxq_cq_to_pkt_type(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 	return mlx5_ptype_table[idx] | rxq->tunnel * !!(idx & (1 << 6));
 }
 
+static inline void mlx5_rq_win_reset(struct mlx5_rxq_data *rxq)
+{
+	static_assert(MLX5_WINOOO_BITS == (sizeof(*rxq->rq_win_data) * CHAR_BIT),
+		      "Invalid out-of-order window bitwidth");
+	rxq->rq_win_idx = 0;
+	rxq->rq_win_cnt = 0;
+	if (rxq->rq_win_data != NULL && rxq->rq_win_idx_mask != 0)
+		memset(rxq->rq_win_data, 0, (rxq->rq_win_idx_mask + 1) * sizeof(*rxq->rq_win_data));
+}
+
+static inline int mlx5_rq_win_init(struct mlx5_rxq_data *rxq)
+{
+	struct mlx5_rxq_ctrl *ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	uint32_t win_size, win_mask;
+
+	/* Set queue size as window size */
+	win_size = 1u << rxq->elts_n;
+	win_size = RTE_MAX(win_size, MLX5_WINOOO_BITS);
+	win_size = win_size / MLX5_WINOOO_BITS;
+	win_mask = win_size - 1;
+	if (win_mask != rxq->rq_win_idx_mask || rxq->rq_win_data == NULL) {
+		mlx5_free(rxq->rq_win_data);
+		rxq->rq_win_idx_mask = 0;
+		rxq->rq_win_data = mlx5_malloc(MLX5_MEM_RTE,
+					       win_size * sizeof(*rxq->rq_win_data),
+					       RTE_CACHE_LINE_SIZE, ctrl->socket);
+		if (rxq->rq_win_data == NULL)
+			return -ENOMEM;
+		rxq->rq_win_idx_mask = (uint16_t)win_mask;
+	}
+	mlx5_rq_win_reset(rxq);
+	return 0;
+}
+
+static inline bool mlx5_rq_win_test(struct mlx5_rxq_data *rxq)
+{
+	return !!rxq->rq_win_cnt;
+}
+
+static inline void mlx5_rq_win_update(struct mlx5_rxq_data *rxq, uint32_t delta)
+{
+	uint32_t idx;
+
+	idx = (delta / MLX5_WINOOO_BITS) + rxq->rq_win_idx;
+	idx &= rxq->rq_win_idx_mask;
+	rxq->rq_win_cnt = 1;
+	rxq->rq_win_data[idx] |= 1u << (delta % MLX5_WINOOO_BITS);
+}
+
+static inline uint32_t mlx5_rq_win_advance(struct mlx5_rxq_data *rxq, uint32_t delta)
+{
+	uint32_t idx;
+
+	idx = (delta / MLX5_WINOOO_BITS) + rxq->rq_win_idx;
+	idx &= rxq->rq_win_idx_mask;
+	rxq->rq_win_data[idx] |= 1u << (delta % MLX5_WINOOO_BITS);
+	++rxq->rq_win_cnt;
+	if (delta >= MLX5_WINOOO_BITS)
+		return 0;
+	delta = 0;
+	while (~rxq->rq_win_data[idx] == 0) {
+		rxq->rq_win_data[idx] = 0;
+		MLX5_ASSERT(rxq->rq_win_cnt >= MLX5_WINOOO_BITS);
+		rxq->rq_win_cnt -= MLX5_WINOOO_BITS;
+		idx = (idx + 1) & rxq->rq_win_idx_mask;
+		rxq->rq_win_idx = idx;
+		delta += MLX5_WINOOO_BITS;
+	}
+	return delta;
+}
+
 /**
  * Initialize Rx WQ and indexes.
  *
  * @param[in] rxq
  *   Pointer to RX queue structure.
  */
-void
+int
 mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
 {
 	const unsigned int wqe_n = 1 << rxq->elts_n;
@@ -413,8 +486,12 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
 		(wqe_n >> rxq->sges_n) * RTE_BIT32(rxq->log_strd_num) : 0;
 	/* Update doorbell counter. */
 	rxq->rq_ci = wqe_n >> rxq->sges_n;
+	rxq->rq_ci_ooo = rxq->rq_ci;
+	if (mlx5_rq_win_init(rxq))
+		return -ENOMEM;
 	rte_io_wmb();
 	*rxq->rq_db = rte_cpu_to_be_32(rxq->rq_ci);
+	return 0;
 }
 
 #define MLX5_ERROR_CQE_MASK 0x40000000
@@ -523,6 +600,9 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
 						    16 * wqe_n);
 			rxq_ctrl->dump_file_n++;
 		}
+		/* Try to find the actual cq_ci in hardware for shared queue. */
+		if (rxq->shared)
+			rxq_sync_cq(rxq);
 		rxq->err_state = MLX5_RXQ_ERR_STATE_NEED_READY;
 		/* Fall-through */
 	case MLX5_RXQ_ERR_STATE_NEED_READY:
@@ -582,7 +662,8 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
 					(*rxq->elts)[elts_n + i] =
 								&rxq->fake_mbuf;
 			}
-			mlx5_rxq_initialize(rxq);
+			if (mlx5_rxq_initialize(rxq))
+				return MLX5_RECOVERY_ERROR_RET;
 			rxq->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
 			return MLX5_RECOVERY_COMPLETED_RET;
 		}
@@ -612,6 +693,10 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
  *   Number of packets skipped due to recoverable errors.
  * @param mprq
  *   Indication if it is called from MPRQ.
+ * @param[out] widx
+ *   Store WQE index from CQE to support out of order completions. NULL
+ *   can be specified if index is not needed
+ *
  * @return
  *   0 in case of empty CQE,
  *   MLX5_REGULAR_ERROR_CQE_RET in case of error CQE,
@@ -623,7 +708,7 @@ static inline int
 mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 		 uint16_t cqe_n, uint16_t cqe_mask,
 		 volatile struct mlx5_mini_cqe8 **mcqe,
-		 uint16_t *skip_cnt, bool mprq)
+		 uint16_t *skip_cnt, bool mprq, uint32_t *widx)
 {
 	struct rxq_zip *zip = &rxq->zip;
 	int len = 0, ret = 0;
@@ -639,6 +724,8 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 							cqe_mask].pkt_info);
 			len = rte_be_to_cpu_32((*mc)[zip->ai & 7].byte_cnt &
 						rxq->byte_mask);
+			if (widx != NULL)
+				*widx = zip->wqe_idx + zip->ai;
 			*mcqe = &(*mc)[zip->ai & 7];
 			if (rxq->cqe_comp_layout) {
 				zip->ai++;
@@ -692,6 +779,9 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 			if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {
 				if (unlikely(ret == MLX5_CQE_STATUS_ERR ||
 					     rxq->err_state)) {
+					/* We should try to track out-of-order WQE */
+					if (widx != NULL)
+						*widx = rte_be_to_cpu_16(cqe->wqe_counter);
 					ret = mlx5_rx_err_handle(rxq, 0, 1, skip_cnt);
 					if (ret == MLX5_CQE_STATUS_HW_OWN)
 						return MLX5_ERROR_CQE_MASK;
@@ -736,6 +826,10 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 				*/
 				zip->ca = cq_ci;
 				zip->na = zip->ca + 7;
+				if (widx != NULL) {
+					zip->wqe_idx = rte_be_to_cpu_16(cqe->wqe_counter);
+					*widx = zip->wqe_idx;
+				}
 				/* Compute the next non compressed CQE. */
 				zip->cq_ci = rxq->cq_ci + zip->cqe_cnt;
 				/* Get packet size to return. */
@@ -760,6 +854,8 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 			} else {
 				++rxq->cq_ci;
 				len = rte_be_to_cpu_32(cqe->byte_cnt);
+				if (widx != NULL)
+					*widx = rte_be_to_cpu_16(cqe->wqe_counter);
 				if (rxq->cqe_comp_layout) {
 					volatile struct mlx5_cqe *next;
 
@@ -975,7 +1071,8 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		}
 		if (!pkt) {
 			cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_mask];
-			len = mlx5_rx_poll_len(rxq, cqe, cqe_n, cqe_mask, &mcqe, &skip_cnt, false);
+			len = mlx5_rx_poll_len(rxq, cqe, cqe_n, cqe_mask,
+					       &mcqe, &skip_cnt, false, NULL);
 			if (unlikely(len & MLX5_ERROR_CQE_MASK)) {
 				/* We drop packets with non-critical errors */
 				rte_mbuf_raw_free(rep);
@@ -1061,6 +1158,181 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	return i;
 }
 
+/**
+ * DPDK callback for RX with Out-of-Order completions support.
+ *
+ * @param dpdk_rxq
+ *   Generic pointer to RX queue structure.
+ * @param[out] pkts
+ *   Array to store received packets.
+ * @param pkts_n
+ *   Maximum number of packets in array.
+ *
+ * @return
+ *   Number of packets successfully received (<= pkts_n).
+ */
+uint16_t
+mlx5_rx_burst_out_of_order(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+{
+	struct mlx5_rxq_data *rxq = dpdk_rxq;
+	const uint32_t wqe_n = 1 << rxq->elts_n;
+	const uint32_t wqe_mask = wqe_n - 1;
+	const uint32_t cqe_n = 1 << rxq->cqe_n;
+	const uint32_t cqe_mask = cqe_n - 1;
+	const unsigned int sges_n = rxq->sges_n;
+	const uint32_t pkt_mask = wqe_mask >> sges_n;
+	struct rte_mbuf *pkt = NULL;
+	struct rte_mbuf *seg = NULL;
+	volatile struct mlx5_cqe *cqe =
+		&(*rxq->cqes)[rxq->cq_ci & cqe_mask];
+	unsigned int i = 0;
+	int len = 0; /* keep its value across iterations. */
+	const uint32_t rq_ci = rxq->rq_ci;
+	uint32_t idx = 0;
+
+	do {
+		volatile struct mlx5_wqe_data_seg *wqe;
+		struct rte_mbuf *rep = NULL;
+		volatile struct mlx5_mini_cqe8 *mcqe = NULL;
+		uint32_t delta;
+		uint16_t skip_cnt;
+
+		if (!pkt) {
+			cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_mask];
+			rte_prefetch0(cqe);
+			/* Allocate from the first packet mbuf pool */
+			rep = (*rxq->elts)[0];
+			/* We must allocate before CQE consuming to allow retry */
+			rep = rte_mbuf_raw_alloc(rep->pool);
+			if (unlikely(rep == NULL)) {
+				++rxq->stats.rx_nombuf;
+				break;
+			}
+			len = mlx5_rx_poll_len(rxq, cqe, cqe_n, cqe_mask,
+					       &mcqe, &skip_cnt, false, &idx);
+			if (unlikely(len == MLX5_CRITICAL_ERROR_CQE_RET)) {
+				rte_mbuf_raw_free(rep);
+				mlx5_rq_win_reset(rxq);
+				break;
+			}
+			if (len == 0) {
+				rte_mbuf_raw_free(rep);
+				break;
+			}
+			idx &= pkt_mask;
+			delta = (idx - rxq->rq_ci) & pkt_mask;
+			MLX5_ASSERT(delta < ((rxq->rq_win_idx_mask + 1) * MLX5_WINOOO_BITS));
+			if (likely(!mlx5_rq_win_test(rxq))) {
+				/* No out of order completions in sliding window */
+				if (likely(delta == 0))
+					rxq->rq_ci++;
+				else
+					mlx5_rq_win_update(rxq, delta);
+			} else {
+				/* We have out of order completions */
+				rxq->rq_ci += mlx5_rq_win_advance(rxq, delta);
+			}
+			if (rxq->zip.ai == 0)
+				rxq->rq_ci_ooo = rxq->rq_ci;
+			idx <<= sges_n;
+			/* We drop packets with non-critical errors */
+			if (unlikely(len & MLX5_ERROR_CQE_MASK)) {
+				rte_mbuf_raw_free(rep);
+				continue;
+			}
+		}
+		wqe = &((volatile struct mlx5_wqe_data_seg *)rxq->wqes)[idx];
+		if (unlikely(pkt))
+			NEXT(seg) = (*rxq->elts)[idx];
+		seg = (*rxq->elts)[idx];
+		rte_prefetch0(seg);
+		rte_prefetch0(wqe);
+		/* Allocate the buf from the same pool. */
+		if (unlikely(rep == NULL)) {
+			rep = rte_mbuf_raw_alloc(seg->pool);
+			if (unlikely(rep == NULL)) {
+				++rxq->stats.rx_nombuf;
+				if (!pkt) {
+					/*
+					 * no buffers before we even started,
+					 * bail out silently.
+					 */
+					break;
+				}
+				while (pkt != seg) {
+					MLX5_ASSERT(pkt != (*rxq->elts)[idx]);
+					rep = NEXT(pkt);
+					NEXT(pkt) = NULL;
+					NB_SEGS(pkt) = 1;
+					rte_mbuf_raw_free(pkt);
+					pkt = rep;
+				}
+				break;
+			}
+		}
+		if (!pkt) {
+			pkt = seg;
+			MLX5_ASSERT(len >= (rxq->crc_present << 2));
+			pkt->ol_flags &= RTE_MBUF_F_EXTERNAL;
+			if (rxq->cqe_comp_layout && mcqe)
+				cqe = &rxq->title_cqe;
+			rxq_cq_to_mbuf(rxq, pkt, cqe, mcqe);
+			if (rxq->crc_present)
+				len -= RTE_ETHER_CRC_LEN;
+			PKT_LEN(pkt) = len;
+			if (cqe->lro_num_seg > 1) {
+				mlx5_lro_update_hdr
+					(rte_pktmbuf_mtod(pkt, uint8_t *), cqe,
+					 mcqe, rxq, len);
+				pkt->ol_flags |= RTE_MBUF_F_RX_LRO;
+				pkt->tso_segsz = len / cqe->lro_num_seg;
+			}
+		}
+		DATA_LEN(rep) = DATA_LEN(seg);
+		PKT_LEN(rep) = PKT_LEN(seg);
+		SET_DATA_OFF(rep, DATA_OFF(seg));
+		PORT(rep) = PORT(seg);
+		(*rxq->elts)[idx] = rep;
+		/*
+		 * Fill NIC descriptor with the new buffer. The lkey and size
+		 * of the buffers are already known, only the buffer address
+		 * changes.
+		 */
+		wqe->addr = rte_cpu_to_be_64(rte_pktmbuf_mtod(rep, uintptr_t));
+		/* If there's only one MR, no need to replace LKey in WQE. */
+		if (unlikely(mlx5_mr_btree_len(&rxq->mr_ctrl.cache_bh) > 1))
+			wqe->lkey = mlx5_rx_mb2mr(rxq, rep);
+		if (len > DATA_LEN(seg)) {
+			len -= DATA_LEN(seg);
+			++NB_SEGS(pkt);
+			++idx;
+			idx &= wqe_mask;
+			continue;
+		}
+		DATA_LEN(seg) = len;
+#ifdef MLX5_PMD_SOFT_COUNTERS
+		/* Increment bytes counter. */
+		rxq->stats.ibytes += PKT_LEN(pkt);
+#endif
+		/* Return packet. */
+		*(pkts++) = pkt;
+		pkt = NULL;
+		++i;
+	} while (i < pkts_n);
+	if (unlikely(i == 0 && rq_ci == rxq->rq_ci_ooo))
+		return 0;
+	/* Update the consumer index. */
+	rte_io_wmb();
+	*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
+	rte_io_wmb();
+	*rxq->rq_db = rte_cpu_to_be_32(rxq->rq_ci_ooo);
+#ifdef MLX5_PMD_SOFT_COUNTERS
+	/* Increment packets counter. */
+	rxq->stats.ipackets += i;
+#endif
+	return i;
+}
+
 /**
  * Update LRO packet TCP header.
  * The HW LRO feature doesn't update the TCP header after coalescing the
@@ -1219,7 +1491,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			buf = (*rxq->mprq_bufs)[rq_ci & wq_mask];
 		}
 		cqe = &(*rxq->cqes)[rxq->cq_ci & cq_mask];
-		ret = mlx5_rx_poll_len(rxq, cqe, cqe_n, cq_mask, &mcqe, &skip_cnt, true);
+		ret = mlx5_rx_poll_len(rxq, cqe, cqe_n, cq_mask, &mcqe, &skip_cnt, true, NULL);
 		if (unlikely(ret & MLX5_ERROR_CQE_MASK)) {
 			if (ret == MLX5_CRITICAL_ERROR_CQE_RET) {
 				rq_ci = rxq->rq_ci;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 2205149458..f9510176a2 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -22,6 +22,7 @@
 
 /* Support tunnel matching. */
 #define MLX5_FLOW_TUNNEL 10
+#define MLX5_WINOOO_BITS  (sizeof(uint32_t) * CHAR_BIT)
 
 #define RXQ_PORT(rxq_ctrl) LIST_FIRST(&(rxq_ctrl)->owners)->priv
 #define RXQ_DEV(rxq_ctrl) ETH_DEV(RXQ_PORT(rxq_ctrl))
@@ -46,6 +47,7 @@ struct rxq_zip {
 	uint32_t ca; /* Current array index. */
 	uint32_t na; /* Next array index. */
 	uint32_t cq_ci; /* The next CQE. */
+	uint16_t wqe_idx; /* WQE index */
 };
 
 /* Get pointer to the first stride. */
@@ -106,6 +108,7 @@ struct mlx5_rxq_data {
 	volatile uint32_t *cq_db;
 	uint32_t elts_ci;
 	uint32_t rq_ci;
+	uint32_t rq_ci_ooo;
 	uint16_t consumed_strd; /* Number of consumed strides in WQE. */
 	uint32_t rq_pi;
 	uint32_t cq_ci:24;
@@ -146,6 +149,10 @@ struct mlx5_rxq_data {
 	uint32_t rxseg_n; /* Number of split segment descriptions. */
 	struct mlx5_eth_rxseg rxseg[MLX5_MAX_RXQ_NSEG];
 	/* Buffer split segment descriptions - sizes, offsets, pools. */
+	uint16_t rq_win_cnt; /* Number of packets in the sliding window data. */
+	uint16_t rq_win_idx_mask; /* Sliding window index wrapping mask. */
+	uint16_t rq_win_idx; /* Index of the first element in sliding window. */
+	uint32_t *rq_win_data; /* Out-of-Order completions sliding window. */
 } __rte_cache_aligned;
 
 /* RX queue control descriptor. */
@@ -291,7 +298,8 @@ int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx,
 /* mlx5_rx.c */
 
 uint16_t mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n);
-void mlx5_rxq_initialize(struct mlx5_rxq_data *rxq);
+uint16_t mlx5_rx_burst_out_of_order(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n);
+int mlx5_rxq_initialize(struct mlx5_rxq_data *rxq);
 __rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
 				      uint16_t err_n, uint16_t *skip_cnt);
 void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
@@ -317,6 +325,7 @@ uint16_t mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts,
 			   uint16_t pkts_n);
 uint16_t mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts,
 				uint16_t pkts_n);
+void rxq_sync_cq(struct mlx5_rxq_data *rxq);
 
 static int mlx5_rxq_mprq_enabled(struct mlx5_rxq_data *rxq);
 
@@ -647,6 +656,23 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 	return n == n_ibv;
 }
 
+/**
+ * Check whether Shared RQ is enabled for the device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   0 if disabled, otherwise enabled.
+ */
+static __rte_always_inline int
+mlx5_shared_rq_enabled(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	return !LIST_EMPTY(&priv->sh->shared_rxqs);
+}
+
 /**
  * Check whether given RxQ is external.
  *
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index aa8e9316af..fb2d9869b6 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -420,7 +420,7 @@ mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx)
 }
 
 /* Fetches and drops all SW-owned and error CQEs to synchronize CQ. */
-static void
+void
 rxq_sync_cq(struct mlx5_rxq_data *rxq)
 {
 	const uint16_t cqe_n = 1 << rxq->cqe_n;
@@ -592,7 +592,13 @@ mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t idx)
 		return ret;
 	}
 	/* Reinitialize RQ - set WQEs. */
-	mlx5_rxq_initialize(rxq_data);
+	ret = mlx5_rxq_initialize(rxq_data);
+	if (ret) {
+		DRV_LOG(ERR, "Port %u Rx queue %u RQ initialization failure.",
+			priv->dev_data->port_id, rxq->idx);
+		rte_errno = ENOMEM;
+		return ret;
+	}
 	rxq_data->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
 	/* Set actual queue state. */
 	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -2306,6 +2312,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 			if (rxq_ctrl->rxq.shared)
 				LIST_REMOVE(rxq_ctrl, share_entry);
 			LIST_REMOVE(rxq_ctrl, next);
+			mlx5_free(rxq_ctrl->rxq.rq_win_data);
 			mlx5_free(rxq_ctrl);
 		}
 		dev->data->rx_queues[idx] = NULL;
-- 
2.34.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2025-07-30 22:50:04.489404650 +0800
+++ 0022-net-mlx5-fix-out-of-order-completions-in-ordinary-Rx.patch	2025-07-30 22:50:03.092765987 +0800
@@ -1 +1 @@
-From 5f9223611f3570c974b9c8e6c0b62db605fb3076 Mon Sep 17 00:00:00 2001
+From c7c7562ab52984a394e9ef7ce524f1aa8e1db2d9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 5f9223611f3570c974b9c8e6c0b62db605fb3076 ]
@@ -31 +34 @@
-index 454bd7c77e..9011319a3e 100644
+index b54f3ccd9a..efe8aa12fb 100644
@@ -50 +53 @@
-index 0ee16ba4f0..10bd93c29a 100644
+index 47e86197b8..be9dbf0467 100644
@@ -53 +56 @@
-@@ -683,7 +683,12 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
+@@ -617,7 +617,12 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
@@ -68 +71 @@
-index b7df39ace9..68d1c1bfa7 100644
+index 3762797fe2..dbfd46ce1c 100644
@@ -71 +74 @@
-@@ -648,6 +648,7 @@ mlx5_dev_supported_ptypes_get(struct rte_eth_dev *dev, size_t *no_of_elements)
+@@ -641,6 +641,7 @@ mlx5_dev_supported_ptypes_get(struct rte_eth_dev *dev)
@@ -78,2 +81,2 @@
- 	    dev->rx_pkt_burst == mlx5_rx_burst_mprq_vec) {
-@@ -718,7 +719,12 @@ mlx5_select_rx_function(struct rte_eth_dev *dev)
+ 	    dev->rx_pkt_burst == mlx5_rx_burst_mprq_vec)
+@@ -709,7 +710,12 @@ mlx5_select_rx_function(struct rte_eth_dev *dev)
@@ -94 +97 @@
-index 5f4a93fe8c..5e8c312d00 100644
+index 86a7e090a1..73d9f23a65 100644
@@ -97 +100 @@
-@@ -42,7 +42,7 @@ static __rte_always_inline int
+@@ -41,7 +41,7 @@ static __rte_always_inline int
@@ -106 +109 @@
-@@ -221,6 +221,8 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
+@@ -220,6 +220,8 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
@@ -115 +118 @@
-@@ -359,13 +361,84 @@ rxq_cq_to_pkt_type(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
+@@ -358,13 +360,84 @@ rxq_cq_to_pkt_type(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
@@ -201 +204 @@
-@@ -414,8 +487,12 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
+@@ -413,8 +486,12 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
@@ -214 +217 @@
-@@ -524,6 +601,9 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
+@@ -523,6 +600,9 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
@@ -224 +227 @@
-@@ -583,7 +663,8 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
+@@ -582,7 +662,8 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
@@ -234 +237 @@
-@@ -613,6 +694,10 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
+@@ -612,6 +693,10 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
@@ -245 +248 @@
-@@ -624,7 +709,7 @@ static inline int
+@@ -623,7 +708,7 @@ static inline int
@@ -254 +257 @@
-@@ -640,6 +725,8 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
+@@ -639,6 +724,8 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
@@ -263 +266 @@
-@@ -693,6 +780,9 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
+@@ -692,6 +779,9 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
@@ -273 +276 @@
-@@ -737,6 +827,10 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
+@@ -736,6 +826,10 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
@@ -284 +287 @@
-@@ -761,6 +855,8 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
+@@ -760,6 +854,8 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
@@ -293 +296 @@
-@@ -976,7 +1072,8 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -975,7 +1071,8 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
@@ -303 +306 @@
-@@ -1062,6 +1159,181 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -1061,6 +1158,181 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
@@ -485 +488 @@
-@@ -1220,7 +1492,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -1219,7 +1491,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
@@ -495 +498 @@
-index 6380895502..4f3d73e3c4 100644
+index 2205149458..f9510176a2 100644
@@ -506 +509 @@
-@@ -64,6 +65,7 @@ struct rxq_zip {
+@@ -46,6 +47,7 @@ struct rxq_zip {
@@ -514 +517 @@
-@@ -124,6 +126,7 @@ struct __rte_cache_aligned mlx5_rxq_data {
+@@ -106,6 +108,7 @@ struct mlx5_rxq_data {
@@ -522 +525 @@
-@@ -164,6 +167,10 @@ struct __rte_cache_aligned mlx5_rxq_data {
+@@ -146,6 +149,10 @@ struct mlx5_rxq_data {
@@ -530 +533 @@
- };
+ } __rte_cache_aligned;
@@ -533 +536 @@
-@@ -305,7 +312,8 @@ int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx,
+@@ -291,7 +298,8 @@ int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx,
@@ -543 +546 @@
-@@ -331,6 +339,7 @@ uint16_t mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts,
+@@ -317,6 +325,7 @@ uint16_t mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts,
@@ -551 +554 @@
-@@ -661,6 +670,23 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
+@@ -647,6 +656,23 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
@@ -576 +579 @@
-index 2e9bcbea4d..77c5848c37 100644
+index aa8e9316af..fb2d9869b6 100644
@@ -579 +582 @@
-@@ -421,7 +421,7 @@ mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx)
+@@ -420,7 +420,7 @@ mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx)
@@ -588 +591 @@
-@@ -593,7 +593,13 @@ mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t idx)
+@@ -592,7 +592,13 @@ mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t idx)
@@ -603 +606 @@
-@@ -2360,6 +2366,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
+@@ -2306,6 +2312,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
