DPDK patches and discussions
* [PATCH 0/4] net/gve: out of order completion processing for DQO
@ 2025-08-26  0:03 Joshua Washington
  2025-08-26  0:03 ` [PATCH 1/4] net/gve: free Rx mbufs if allocation fails on ring setup Joshua Washington
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Joshua Washington @ 2025-08-26  0:03 UTC (permalink / raw)
  Cc: dev, Joshua Washington

Both RX and TX processing on DQ were originally implemented with the
assumption that descriptor completions are written by the hardware in
the order in which the descriptors are posted. In certain cases, such
as RSC on RX and double completions on TX, this is not necessarily
true.

Depends-on: series-35656 ("net/gve: Tx datapath fixes for GVE DQO")

Joshua Washington (4):
  net/gve: free Rx mbufs if allocation fails on ring setup
  net/gve: add datapath-specific logging for gve
  net/gve: support for out of order completions on DQ Tx
  net/gve: support for out of order completions on DQ Rx

 drivers/net/gve/base/gve_adminq.c |   2 +-
 drivers/net/gve/gve_ethdev.h      |  20 ++-
 drivers/net/gve/gve_logs.h        |   3 +
 drivers/net/gve/gve_rx_dqo.c      | 135 +++++++++++-----
 drivers/net/gve/gve_tx_dqo.c      | 250 ++++++++++++++++++------------
 5 files changed, 267 insertions(+), 143 deletions(-)

-- 
2.51.0.rc1.167.g924127e9c0-goog



* [PATCH 1/4] net/gve: free Rx mbufs if allocation fails on ring setup
  2025-08-26  0:03 [PATCH 0/4] net/gve: out of order completion processing for DQO Joshua Washington
@ 2025-08-26  0:03 ` Joshua Washington
  2025-08-26  0:03 ` [PATCH 2/4] net/gve: add datapath-specific logging for gve Joshua Washington
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Joshua Washington @ 2025-08-26  0:03 UTC (permalink / raw)
  To: Jeroen de Borst, Joshua Washington, Rushil Gupta, Praveen Kaligineedi
  Cc: dev, stable, Ankit Garg, Ziwei Xiao

When creating a new RX ring, one less than the number of buffers in
the ring needs to be allocated. It is possible that only part of the
allocation succeeds, resulting in a failure to create the ring. In
this case, the driver should free the buffers that were successfully
allocated to avoid a memory leak in case the application does not exit
after the failure.

Fixes: 265daac8a53a ("net/gve: fix mbuf allocation memory leak for DQ Rx")
Cc: stable@dpdk.org
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Ankit Garg <nktgrg@google.com>
Reviewed-by: Ziwei Xiao <ziweixiao@google.com>
---
 drivers/net/gve/gve_rx_dqo.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 0103add985..cd85d90bb6 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -376,14 +376,13 @@ gve_rxq_mbufs_alloc_dqo(struct gve_rx_queue *rxq)
 		rxq->stats.no_mbufs_bulk++;
 		for (i = 0; i < rx_mask; i++) {
 			nmb = rte_pktmbuf_alloc(rxq->mpool);
-			if (!nmb)
-				break;
+			if (!nmb) {
+				rxq->stats.no_mbufs++;
+				gve_release_rxq_mbufs_dqo(rxq);
+				return -ENOMEM;
+			}
 			rxq->sw_ring[i] = nmb;
 		}
-		if (i < rxq->nb_rx_desc - 1) {
-			rxq->stats.no_mbufs += rx_mask - i;
-			return -ENOMEM;
-		}
 	}
 
 	for (i = 0; i < rx_mask; i++) {
-- 
2.51.0.rc1.167.g924127e9c0-goog



* [PATCH 2/4] net/gve: add datapath-specific logging for gve
  2025-08-26  0:03 [PATCH 0/4] net/gve: out of order completion processing for DQO Joshua Washington
  2025-08-26  0:03 ` [PATCH 1/4] net/gve: free Rx mbufs if allocation fails on ring setup Joshua Washington
@ 2025-08-26  0:03 ` Joshua Washington
  2025-08-26  0:03 ` [PATCH 3/4] net/gve: support for out of order completions on DQ Tx Joshua Washington
  2025-08-26  0:03 ` [PATCH 4/4] net/gve: support for out of order completions on DQ Rx Joshua Washington
  3 siblings, 0 replies; 5+ messages in thread
From: Joshua Washington @ 2025-08-26  0:03 UTC (permalink / raw)
  To: Jeroen de Borst, Joshua Washington; +Cc: dev, Ankit Garg

This allows datapath-related logs to be controlled separately from
non-datapath logs.
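
Below is a minimal usage sketch of the distinction. The helper function is
illustrative only, and it assumes the usual DPDK behavior that RTE_LOG_DP_*
statements more verbose than RTE_LOG_DP_LEVEL (RTE_LOG_INFO by default) are
removed at compile time, while regular logs stay runtime-controlled:

#include <stdint.h>

#include "gve_logs.h"

/* Illustrative helper, not part of the driver. */
static void
log_example(uint16_t port_id, uint16_t queue_id)
{
	/* Control path: always compiled in, filtered at runtime. */
	PMD_DRV_LOG(ERR, "queue setup failed on port %u", port_id);

	/* Datapath: compiled out entirely when RTE_LOG_DP_LEVEL is below
	 * DEBUG (the default), so it adds no cost to the burst functions.
	 */
	PMD_DRV_DP_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
		       port_id, queue_id);
}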

Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Ankit Garg <nktgrg@google.com>
---
 drivers/net/gve/gve_logs.h   |  3 +++
 drivers/net/gve/gve_rx_dqo.c |  5 +++--
 drivers/net/gve/gve_tx_dqo.c | 11 ++++++-----
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/net/gve/gve_logs.h b/drivers/net/gve/gve_logs.h
index a3d50fa45c..beada20e02 100644
--- a/drivers/net/gve/gve_logs.h
+++ b/drivers/net/gve/gve_logs.h
@@ -11,4 +11,7 @@ extern int gve_logtype_driver;
 #define PMD_DRV_LOG(level, ...) \
 	RTE_LOG_LINE_PREFIX(level, GVE_DRIVER, "%s(): ", __func__, __VA_ARGS__)
 
+#define PMD_DRV_DP_LOG(level, ...) \
+	RTE_LOG_DP_LINE_PREFIX(level, GVE_DRIVER, "%s(): ", __func__,  __VA_ARGS__)
+
 #endif
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index cd85d90bb6..ccaca1b0ea 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -27,8 +27,9 @@ gve_rx_refill_dqo(struct gve_rx_queue *rxq)
 		rxq->stats.no_mbufs += nb_refill;
 		dev = &rte_eth_devices[rxq->port_id];
 		dev->data->rx_mbuf_alloc_failed += nb_refill;
-		PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-			    rxq->port_id, rxq->queue_id);
+		PMD_DRV_DP_LOG(DEBUG,
+			       "RX mbuf alloc failed port_id=%u queue_id=%u",
+			       rxq->port_id, rxq->queue_id);
 		return;
 	}
 
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index c36c215b94..10ef645802 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -42,7 +42,7 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 		aim_txq->last_desc_cleaned = compl_tag;
 		break;
 	case GVE_COMPL_TYPE_DQO_REINJECTION:
-		PMD_DRV_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_REINJECTION !!!");
+		PMD_DRV_DP_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_REINJECTION !!!");
 		/* FALLTHROUGH */
 	case GVE_COMPL_TYPE_DQO_PKT:
 		/* free all segments. */
@@ -58,10 +58,10 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 		break;
 	case GVE_COMPL_TYPE_DQO_MISS:
 		rte_delay_us_sleep(1);
-		PMD_DRV_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_MISS ignored !!!");
+		PMD_DRV_DP_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_MISS ignored !!!");
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "unknown completion type.");
+		PMD_DRV_DP_LOG(ERR, "unknown completion type.");
 		return;
 	}
 
@@ -206,7 +206,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 
 		if (rte_mbuf_check(tx_pkt, true, &reason)) {
-			PMD_DRV_LOG(DEBUG, "Invalid mbuf: %s", reason);
+			PMD_DRV_DP_LOG(DEBUG, "Invalid mbuf: %s", reason);
 			break;
 		}
 
@@ -243,7 +243,8 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		do {
 			if (sw_ring[sw_id] != NULL)
-				PMD_DRV_LOG(DEBUG, "Overwriting an entry in sw_ring");
+				PMD_DRV_DP_LOG(DEBUG,
+					       "Overwriting an entry in sw_ring");
 
 			/* Skip writing descriptor if mbuf has no data. */
 			if (!tx_pkt->data_len)
-- 
2.51.0.rc1.167.g924127e9c0-goog



* [PATCH 3/4] net/gve: support for out of order completions on DQ Tx
  2025-08-26  0:03 [PATCH 0/4] net/gve: out of order completion processing for DQO Joshua Washington
  2025-08-26  0:03 ` [PATCH 1/4] net/gve: free Rx mbufs if allocation fails on ring setup Joshua Washington
  2025-08-26  0:03 ` [PATCH 2/4] net/gve: add datapath-specific logging for gve Joshua Washington
@ 2025-08-26  0:03 ` Joshua Washington
  2025-08-26  0:03 ` [PATCH 4/4] net/gve: support for out of order completions on DQ Rx Joshua Washington
  3 siblings, 0 replies; 5+ messages in thread
From: Joshua Washington @ 2025-08-26  0:03 UTC (permalink / raw)
  To: Jeroen de Borst, Joshua Washington; +Cc: dev, Ankit Garg

Prior to this patch, the DQ Tx path used two separate arrays to manage
the TX submission queue: tx_ring and sw_ring. The tx_ring represented
the actual hardware descriptor ring, while the sw_ring held the DMA'd
buffers used in transmission.

Given that the DQ format completes descriptors out of order, using a
ring to track buffers DMA'd to the hardware is sub-optimal at best and
error-prone at worst. Because completions can arrive out of order, it
was possible for the next sw_ring slot to be posted to still be
awaiting its completion. In that case, the driver had to pause
transmission until the completion arrived or risk leaving an mbuf
pointer unfreed. In effect, to prevent memory leaks, the driver had to
act as if completions arrive in order, sometimes pausing traffic even
though buffers were ready to be used, which can degrade performance.

To combat this, the GVE driver sized the sw_ring at 4x the hardware
ring. However, this mitigation offered no real guarantee against
memory leaks and should not have been relied on for correct driver
behavior.

DQ completions happen at packet granularity, not buffer granularity.
As such, this patch introduces a new pkt_ring_dqo struct so that
multi-descriptor packets do not consume as many entries in the
"sw_ring". Additionally, a new free-list stack is introduced to hand
out available completion tags, allowing packet completions to be
processed correctly out of order. Where the sw_ring could previously
be overrun, there is now a limit on the number of available completion
tags, which matches the number of possible outstanding packets.
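
For illustration, the completion-tag free list can be modeled as a
standalone, index-linked stack over a fixed-size array. The sketch below
uses made-up names (pkt_slot, free_list_*) and a toy ring size rather than
the driver's own structures:

#include <stdbool.h>
#include <stdint.h>

#define RING_SIZE 8			/* stands in for sw_size */

struct pkt_slot {
	void *mbuf;			/* outstanding packet, NULL when free */
	int16_t next_free;		/* next free slot, -1 terminates the list */
};

static struct pkt_slot ring[RING_SIZE];
static int16_t free_head;

static void
free_list_init(void)
{
	for (int i = 0; i < RING_SIZE - 1; i++)
		ring[i].next_free = i + 1;
	ring[RING_SIZE - 1].next_free = -1;
	free_head = 0;
}

/* Pop a completion tag before posting a packet; fails when every tag is
 * already outstanding, which bounds the number of in-flight packets.
 */
static bool
free_list_pop(uint16_t *tag)
{
	if (free_head == -1)
		return false;
	*tag = (uint16_t)free_head;
	free_head = ring[free_head].next_free;
	return true;
}

/* Push the tag back when its packet completion arrives, in any order. */
static void
free_list_push(uint16_t tag)
{
	ring[tag].next_free = free_head;
	free_head = (int16_t)tag;
}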

Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Ankit Garg <nktgrg@google.com>
---
 drivers/net/gve/base/gve_adminq.c |   2 +-
 drivers/net/gve/gve_ethdev.h      |  16 +-
 drivers/net/gve/gve_tx_dqo.c      | 243 ++++++++++++++++++------------
 3 files changed, 160 insertions(+), 101 deletions(-)

diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c
index b1fe33080a..5d3270106c 100644
--- a/drivers/net/gve/base/gve_adminq.c
+++ b/drivers/net/gve/base/gve_adminq.c
@@ -582,7 +582,7 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index)
 		cmd.create_tx_queue.tx_comp_ring_addr =
 			cpu_to_be64(txq->compl_ring_phys_addr);
 		cmd.create_tx_queue.tx_comp_ring_size =
-			cpu_to_be16(txq->sw_size);
+			cpu_to_be16(txq->nb_complq_desc);
 	}
 
 	return gve_adminq_issue_cmd(priv, &cmd);
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 195dadc4d4..b36f0ff746 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -119,16 +119,25 @@ struct gve_xstats_name_offset {
 	unsigned int offset;
 };
 
+struct gve_tx_pkt {
+	struct rte_mbuf *mbuf;
+	int16_t next_avail_pkt; /* Entry in software ring for next free packet slot */
+};
+
 struct gve_tx_queue {
 	volatile union gve_tx_desc *tx_desc_ring;
 	const struct rte_memzone *mz;
 	uint64_t tx_ring_phys_addr;
-	struct rte_mbuf **sw_ring;
+	union {
+		struct rte_mbuf **sw_ring;
+		struct gve_tx_pkt *pkt_ring_dqo;
+	};
 	volatile rte_be32_t *qtx_tail;
 	volatile rte_be32_t *qtx_head;
 
 	uint32_t tx_tail;
 	uint16_t nb_tx_desc;
+	uint16_t nb_complq_desc;
 	uint16_t nb_free;
 	uint16_t nb_used;
 	uint32_t next_to_clean;
@@ -162,6 +171,11 @@ struct gve_tx_queue {
 	/* newly added for DQO */
 	volatile union gve_tx_desc_dqo *tx_ring;
 	struct gve_tx_compl_desc *compl_ring;
+
+	/* List of free completion tags that map into sw_ring. */
+	int16_t free_compl_tags_head;
+	uint16_t num_free_compl_tags;
+
 	const struct rte_memzone *compl_ring_mz;
 	uint64_t compl_ring_phys_addr;
 	uint32_t complq_tail;
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 10ef645802..4948502725 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -5,6 +5,50 @@
 
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
+#include "rte_malloc.h"
+
+static inline void
+gve_free_compl_tags_init(struct gve_tx_queue *txq)
+{
+	uint16_t sw_mask = txq->sw_size - 1;
+	int i;
+
+	for (i = 0; i < sw_mask; i++)
+		txq->pkt_ring_dqo[i].next_avail_pkt = (i + 1) & sw_mask;
+
+	txq->pkt_ring_dqo[sw_mask].next_avail_pkt = -1;
+	txq->free_compl_tags_head = 0;
+	txq->num_free_compl_tags = txq->sw_size;
+}
+
+/* Removes first item from the buffer free list, and return its array index. */
+static inline bool
+gve_free_compl_tags_pop(struct gve_tx_queue *txq, uint16_t *compl_tag)
+{
+	int16_t head = txq->free_compl_tags_head;
+	if (likely(head != -1)) {
+		struct gve_tx_pkt *head_pkt = &txq->pkt_ring_dqo[head];
+
+		txq->free_compl_tags_head = head_pkt->next_avail_pkt;
+		txq->num_free_compl_tags--;
+		*compl_tag = head;
+		return true;
+	}
+
+	PMD_DRV_DP_LOG(ERR, "Completion tags list is empty!");
+	return false;
+}
+
+/* Adds gve_tx_pkt in pkt_ring to free list. Assumes that
+ * buf_id < txq->sw_size.
+ */
+static inline void
+gve_free_compl_tags_push(struct gve_tx_queue *txq, uint16_t compl_tag)
+{
+	txq->pkt_ring_dqo[compl_tag].next_avail_pkt = txq->free_compl_tags_head;
+	txq->free_compl_tags_head = compl_tag;
+	txq->num_free_compl_tags++;
+}
 
 static inline void
 gve_tx_clean_dqo(struct gve_tx_queue *txq)
@@ -12,8 +56,8 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 	struct gve_tx_compl_desc *compl_ring;
 	struct gve_tx_compl_desc *compl_desc;
 	struct gve_tx_queue *aim_txq;
-	uint16_t nb_desc_clean;
-	struct rte_mbuf *txe, *txe_next;
+	struct gve_tx_pkt *pkt;
+	uint16_t new_tx_head;
 	uint16_t compl_tag;
 	uint16_t next;
 
@@ -26,35 +70,38 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 
 	rte_io_rmb();
 
-	compl_tag = rte_le_to_cpu_16(compl_desc->completion_tag);
-
 	aim_txq = txq->txqs[compl_desc->id];
 
 	switch (compl_desc->type) {
 	case GVE_COMPL_TYPE_DQO_DESC:
-		/* need to clean Descs from last_cleaned to compl_tag */
-		if (aim_txq->last_desc_cleaned > compl_tag)
-			nb_desc_clean = aim_txq->nb_tx_desc - aim_txq->last_desc_cleaned +
-					compl_tag;
-		else
-			nb_desc_clean = compl_tag - aim_txq->last_desc_cleaned;
-		aim_txq->nb_free += nb_desc_clean;
-		aim_txq->last_desc_cleaned = compl_tag;
+		new_tx_head = rte_le_to_cpu_16(compl_desc->tx_head);
+		aim_txq->nb_free +=
+			(new_tx_head - aim_txq->last_desc_cleaned)
+				& (aim_txq->nb_tx_desc - 1);
+		aim_txq->last_desc_cleaned = new_tx_head;
 		break;
 	case GVE_COMPL_TYPE_DQO_REINJECTION:
 		PMD_DRV_DP_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_REINJECTION !!!");
 		/* FALLTHROUGH */
 	case GVE_COMPL_TYPE_DQO_PKT:
-		/* free all segments. */
-		txe = aim_txq->sw_ring[compl_tag];
-		while (txe != NULL) {
-			txe_next = txe->next;
-			rte_pktmbuf_free_seg(txe);
-			if (aim_txq->sw_ring[compl_tag] == txe)
-				aim_txq->sw_ring[compl_tag] = NULL;
-			txe = txe_next;
-			compl_tag = (compl_tag + 1) & (aim_txq->sw_size - 1);
+		/* The first segment has buf_id == completion_tag. */
+		compl_tag = rte_le_to_cpu_16(compl_desc->completion_tag);
+		if (unlikely(compl_tag >= txq->sw_size)) {
+			PMD_DRV_DP_LOG(ERR, "Invalid completion tag %d",
+				       compl_tag);
+			break;
+		}
+
+		/* Free packet.*/
+		pkt = &aim_txq->pkt_ring_dqo[compl_tag];
+		if (unlikely(!pkt->mbuf)) {
+			PMD_DRV_DP_LOG(ERR, "No outstanding packet for completion tag %d",
+				       compl_tag);
+			break;
 		}
+		rte_pktmbuf_free(pkt->mbuf);
+		pkt->mbuf = NULL;
+		gve_free_compl_tags_push(txq, compl_tag);
 		break;
 	case GVE_COMPL_TYPE_DQO_MISS:
 		rte_delay_us_sleep(1);
@@ -66,11 +113,10 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 	}
 
 	next++;
-	if (next == txq->nb_tx_desc * DQO_TX_MULTIPLIER) {
+	if (next == txq->nb_complq_desc) {
 		next = 0;
 		txq->cur_gen_bit ^= 1;
 	}
-
 	txq->complq_tail = next;
 }
 
@@ -155,6 +201,12 @@ gve_tx_pkt_nb_data_descs(struct rte_mbuf *tx_pkt)
 	return nb_descs;
 }
 
+static inline bool
+gve_can_tx(struct gve_tx_queue *txq, uint16_t nb_desc, uint16_t nb_pkts)
+{
+	return txq->nb_free >= nb_desc && txq->num_free_compl_tags >= nb_pkts;
+}
+
 static inline void
 gve_tx_fill_seg_desc_dqo(volatile union gve_tx_desc_dqo *desc, struct rte_mbuf *tx_pkt)
 {
@@ -168,39 +220,60 @@ gve_tx_fill_seg_desc_dqo(volatile union gve_tx_desc_dqo *desc, struct rte_mbuf *
 	desc->tso_ctx.header_len = (uint8_t)hlen;
 }
 
+static void
+gve_fill_tx_pkt_desc(struct gve_tx_queue *txq, uint16_t *tx_id,
+		     struct rte_mbuf *tx_pkt, uint16_t compl_tag,
+		     bool checksum_offload_enable)
+{
+	volatile union gve_tx_desc_dqo *desc;
+	uint16_t mask = txq->nb_tx_desc - 1;
+	int mbuf_offset = 0;
+
+	while (mbuf_offset < tx_pkt->data_len) {
+		uint64_t buf_addr = rte_mbuf_data_iova(tx_pkt) + mbuf_offset;
+
+		desc = &txq->tx_ring[*tx_id];
+		desc->pkt = (struct gve_tx_pkt_desc_dqo) {};
+		desc->pkt.buf_addr = rte_cpu_to_le_64(buf_addr);
+		desc->pkt.compl_tag = rte_cpu_to_le_16(compl_tag);
+		desc->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
+		desc->pkt.buf_size = RTE_MIN(tx_pkt->data_len - mbuf_offset,
+					     GVE_TX_MAX_BUF_SIZE_DQO);
+		desc->pkt.end_of_packet = 0;
+		desc->pkt.checksum_offload_enable = checksum_offload_enable;
+
+		mbuf_offset += desc->pkt.buf_size;
+		*tx_id = (*tx_id + 1) & mask;
+	}
+}
+
 uint16_t
 gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct gve_tx_queue *txq = tx_queue;
 	volatile union gve_tx_desc_dqo *txr;
 	volatile union gve_tx_desc_dqo *txd;
-	struct rte_mbuf **sw_ring;
+	uint16_t mask = txq->nb_tx_desc - 1;
+	struct gve_tx_pkt *pkts;
 	struct rte_mbuf *tx_pkt;
-	uint16_t mask, sw_mask;
-	uint16_t first_sw_id;
+	uint16_t compl_tag;
 	const char *reason;
 	uint16_t nb_tx = 0;
+	uint64_t bytes = 0;
 	uint64_t ol_flags;
 	uint16_t nb_descs;
+	bool csum, tso;
 	uint16_t tx_id;
-	uint16_t sw_id;
-	uint64_t bytes;
-	uint8_t tso;
-	uint8_t csum;
 
-	sw_ring = txq->sw_ring;
+	pkts = txq->pkt_ring_dqo;
 	txr = txq->tx_ring;
 
-	bytes = 0;
-	mask = txq->nb_tx_desc - 1;
-	sw_mask = txq->sw_size - 1;
 	tx_id = txq->tx_tail;
-	sw_id = txq->sw_tail;
 
 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
 		tx_pkt = tx_pkts[nb_tx];
 
-		if (txq->nb_free <= txq->free_thresh)
+		if (!gve_can_tx(txq, txq->free_thresh, nb_pkts - nb_tx))
 			gve_tx_clean_descs_dqo(txq, DQO_TX_MULTIPLIER *
 					       txq->rs_thresh);
 
@@ -211,8 +284,6 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		ol_flags = tx_pkt->ol_flags;
-		first_sw_id = sw_id;
-
 		tso = !!(ol_flags & RTE_MBUF_F_TX_TCP_SEG);
 		csum = !!(ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK_DQO);
 
@@ -220,12 +291,12 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		nb_descs += tso;
 
 		/* Clean if there aren't enough descriptors to send the packet. */
-		if (unlikely(txq->nb_free < nb_descs)) {
+		if (unlikely(!gve_can_tx(txq, nb_descs, 1))) {
 			int nb_to_clean = RTE_MAX(DQO_TX_MULTIPLIER * txq->rs_thresh,
 						  nb_descs);
 
 			gve_tx_clean_descs_dqo(txq, nb_to_clean);
-			if (txq->nb_free < nb_descs)
+			if (!gve_can_tx(txq, nb_descs, 1))
 				break;
 		}
 
@@ -241,44 +312,21 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			tx_id = (tx_id + 1) & mask;
 		}
 
-		do {
-			if (sw_ring[sw_id] != NULL)
-				PMD_DRV_DP_LOG(DEBUG,
-					       "Overwriting an entry in sw_ring");
-
-			/* Skip writing descriptor if mbuf has no data. */
-			if (!tx_pkt->data_len)
-				goto finish_mbuf;
+		if (!gve_free_compl_tags_pop(txq, &compl_tag))
+			break;
 
-			txd = &txr[tx_id];
-			sw_ring[sw_id] = tx_pkt;
-
-			/* fill Tx descriptors */
-			int mbuf_offset = 0;
-			while (mbuf_offset < tx_pkt->data_len) {
-				uint64_t buf_addr = rte_mbuf_data_iova(tx_pkt) +
-					mbuf_offset;
-
-				txd = &txr[tx_id];
-				txd->pkt = (struct gve_tx_pkt_desc_dqo) {};
-				txd->pkt.buf_addr = rte_cpu_to_le_64(buf_addr);
-				txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
-				txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
-				txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len - mbuf_offset,
-							    GVE_TX_MAX_BUF_SIZE_DQO);
-				txd->pkt.end_of_packet = 0;
-				txd->pkt.checksum_offload_enable = csum;
-
-				mbuf_offset += txd->pkt.buf_size;
-				tx_id = (tx_id + 1) & mask;
+		pkts[compl_tag].mbuf = tx_pkt;
+		while (tx_pkt) {
+			/* Skip writing descriptors if mbuf has no data. */
+			if (!tx_pkt->data_len) {
+				tx_pkt = tx_pkt->next;
+				continue;
 			}
-
-finish_mbuf:
-			sw_id = (sw_id + 1) & sw_mask;
+			gve_fill_tx_pkt_desc(txq, &tx_id, tx_pkt, compl_tag,
+					     csum);
 			bytes += tx_pkt->data_len;
 			tx_pkt = tx_pkt->next;
-		} while (tx_pkt);
-
+		}
 		/* fill the last written descriptor with End of Packet (EOP) bit */
 		txd = &txr[(tx_id - 1) & mask];
 		txd->pkt.end_of_packet = 1;
@@ -299,7 +347,6 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		rte_write32(tx_id, txq->qtx_tail);
 		txq->tx_tail = tx_id;
-		txq->sw_tail = sw_id;
 
 		txq->stats.packets += nb_tx;
 		txq->stats.bytes += bytes;
@@ -314,12 +361,8 @@ gve_release_txq_mbufs_dqo(struct gve_tx_queue *txq)
 {
 	uint16_t i;
 
-	for (i = 0; i < txq->sw_size; i++) {
-		if (txq->sw_ring[i]) {
-			rte_pktmbuf_free_seg(txq->sw_ring[i]);
-			txq->sw_ring[i] = NULL;
-		}
-	}
+	for (i = 0; i < txq->sw_size; i++)
+		rte_pktmbuf_free(txq->pkt_ring_dqo[i].mbuf);
 }
 
 void
@@ -331,7 +374,7 @@ gve_tx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid)
 		return;
 
 	gve_release_txq_mbufs_dqo(q);
-	rte_free(q->sw_ring);
+	rte_free(q->pkt_ring_dqo);
 	rte_memzone_free(q->mz);
 	rte_memzone_free(q->compl_ring_mz);
 	rte_memzone_free(q->qres_mz);
@@ -372,13 +415,13 @@ check_tx_thresh_dqo(uint16_t nb_desc, uint16_t tx_rs_thresh,
 }
 
 static void
-gve_reset_txq_dqo(struct gve_tx_queue *txq)
+gve_reset_tx_ring_state_dqo(struct gve_tx_queue *txq)
 {
-	struct rte_mbuf **sw_ring;
+	struct gve_tx_pkt *pkt_ring_dqo;
 	uint32_t size, i;
 
 	if (txq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		PMD_DRV_LOG(ERR, "Pointer to txq is NULL");
 		return;
 	}
 
@@ -386,19 +429,20 @@ gve_reset_txq_dqo(struct gve_tx_queue *txq)
 	for (i = 0; i < size; i++)
 		((volatile char *)txq->tx_ring)[i] = 0;
 
-	size = txq->sw_size * sizeof(struct gve_tx_compl_desc);
+	size = txq->nb_complq_desc * sizeof(struct gve_tx_compl_desc);
 	for (i = 0; i < size; i++)
 		((volatile char *)txq->compl_ring)[i] = 0;
 
-	sw_ring = txq->sw_ring;
+	pkt_ring_dqo = txq->pkt_ring_dqo;
 	for (i = 0; i < txq->sw_size; i++)
-		sw_ring[i] = NULL;
+		pkt_ring_dqo[i].mbuf = NULL;
+
+	gve_free_compl_tags_init(txq);
 
 	txq->tx_tail = 0;
 	txq->nb_used = 0;
 
 	txq->last_desc_cleaned = 0;
-	txq->sw_tail = 0;
 	txq->nb_free = txq->nb_tx_desc - 1;
 
 	txq->complq_tail = 0;
@@ -442,6 +486,7 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 		return -EINVAL;
 
 	txq->nb_tx_desc = nb_desc;
+	txq->nb_complq_desc = nb_desc * DQO_TX_MULTIPLIER;
 	txq->free_thresh = free_thresh;
 	txq->rs_thresh = rs_thresh;
 	txq->queue_id = queue_id;
@@ -451,11 +496,11 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	txq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[txq->ntfy_id].id)];
 
 	/* Allocate software ring */
-	sw_size = nb_desc * DQO_TX_MULTIPLIER;
-	txq->sw_ring = rte_zmalloc_socket("gve tx sw ring",
-					  sw_size * sizeof(struct rte_mbuf *),
+	sw_size = nb_desc;
+	txq->pkt_ring_dqo = rte_zmalloc_socket("gve tx sw ring",
+					  sw_size * sizeof(struct gve_tx_pkt),
 					  RTE_CACHE_LINE_SIZE, socket_id);
-	if (txq->sw_ring == NULL) {
+	if (txq->pkt_ring_dqo == NULL) {
 		PMD_DRV_LOG(ERR, "Failed to allocate memory for SW TX ring");
 		err = -ENOMEM;
 		goto free_txq;
@@ -469,7 +514,7 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	if (mz == NULL) {
 		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX");
 		err = -ENOMEM;
-		goto free_txq_sw_ring;
+		goto free_txq_pkt_ring;
 	}
 	txq->tx_ring = (union gve_tx_desc_dqo *)mz->addr;
 	txq->tx_ring_phys_addr = mz->iova;
@@ -477,7 +522,7 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 
 	/* Allocate TX completion ring descriptors. */
 	mz = rte_eth_dma_zone_reserve(dev, "tx_compl_ring", queue_id,
-				      sw_size * sizeof(struct gve_tx_compl_desc),
+				       txq->nb_complq_desc * sizeof(struct gve_tx_compl_desc),
 				      PAGE_SIZE, socket_id);
 	if (mz == NULL) {
 		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX completion queue");
@@ -500,7 +545,7 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	txq->qres = (struct gve_queue_resources *)mz->addr;
 	txq->qres_mz = mz;
 
-	gve_reset_txq_dqo(txq);
+	gve_reset_tx_ring_state_dqo(txq);
 
 	dev->data->tx_queues[queue_id] = txq;
 
@@ -510,8 +555,8 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rte_memzone_free(txq->compl_ring_mz);
 free_txq_mz:
 	rte_memzone_free(txq->mz);
-free_txq_sw_ring:
-	rte_free(txq->sw_ring);
+free_txq_pkt_ring:
+	rte_free(txq->pkt_ring_dqo);
 free_txq:
 	rte_free(txq);
 	return err;
@@ -551,7 +596,7 @@ gve_tx_queue_stop_dqo(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	txq = dev->data->tx_queues[tx_queue_id];
 	gve_release_txq_mbufs_dqo(txq);
-	gve_reset_txq_dqo(txq);
+	gve_reset_tx_ring_state_dqo(txq);
 
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
-- 
2.51.0.rc1.167.g924127e9c0-goog



* [PATCH 4/4] net/gve: support for out of order completions on DQ Rx
  2025-08-26  0:03 [PATCH 0/4] net/gve: out of order completion processing for DQO Joshua Washington
                   ` (2 preceding siblings ...)
  2025-08-26  0:03 ` [PATCH 3/4] net/gve: support for out of order completions on DQ Tx Joshua Washington
@ 2025-08-26  0:03 ` Joshua Washington
  3 siblings, 0 replies; 5+ messages in thread
From: Joshua Washington @ 2025-08-26  0:03 UTC (permalink / raw)
  To: Jeroen de Borst, Joshua Washington; +Cc: dev, Ankit Garg

The DPDK DQ driver made the implicit assumption that buffers are
returned from the NIC in the order in which they are posted. While
this is generally the case, there are edge cases, such as HW GRO,
where buffers may be completed out of order. In those cases the driver
would return the wrong mbuf to the application, which causes problems
if the mbuf is freed while the NIC still expects to own the buffer, or
if the application receives a buffer containing garbage data.

This patch brings the RX path behavior in line with what the NIC
expects by using the buf_id in the completion descriptor to decide
which buffer to complete. A stack is introduced to hold the available
buf_ids when posting, to handle the fact that buf_ids can come back
from the hardware out of order.
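
As a rough standalone model of the new Rx flow (illustrative names, not
the driver's structures): the completion handler looks the mbuf up by the
descriptor's buf_id and recycles that buf_id, and the refill path reuses
recycled buf_ids instead of walking the ring sequentially:

#include <stdint.h>

#define NB_DESC 8			/* stands in for nb_rx_desc */

static void *sw_ring[NB_DESC];		/* mbuf owned by HW, keyed by buf_id */
static int16_t completed[NB_DESC];	/* intrusive stack of returned buf_ids */
static int16_t completed_head = -1;

/* On completion: fetch the packet by buf_id, then push the buf_id so the
 * refill path can reuse it, regardless of the order it came back in.
 */
static void *
rx_complete(uint16_t buf_id)
{
	void *mbuf = sw_ring[buf_id];

	completed[buf_id] = completed_head;
	completed_head = (int16_t)buf_id;
	return mbuf;
}

/* On refill: pop a recycled buf_id for the newly posted mbuf; -1 means no
 * completed slot is available yet.
 */
static int
rx_refill(void *new_mbuf, uint16_t *buf_id)
{
	if (completed_head == -1)
		return -1;
	*buf_id = (uint16_t)completed_head;
	completed_head = completed[completed_head];
	sw_ring[*buf_id] = new_mbuf;
	return 0;
}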

Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Ankit Garg <nktgrg@google.com>
---
 drivers/net/gve/gve_ethdev.h |   4 ++
 drivers/net/gve/gve_rx_dqo.c | 119 ++++++++++++++++++++++++++---------
 2 files changed, 92 insertions(+), 31 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index b36f0ff746..f7cc781640 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -242,6 +242,10 @@ struct gve_rx_queue {
 	uint8_t cur_gen_bit;
 	uint16_t bufq_tail;
 
+	/* List of buffers which are known to be completed by the hardware. */
+	int16_t *completed_buf_list;
+	int16_t completed_buf_list_head;
+
 	/* Only valid for DQO_RDA queue format */
 	struct gve_rx_queue *bufq;
 
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index ccaca1b0ea..a0ef21bc8e 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -7,22 +7,55 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 #include "rte_mbuf_ptype.h"
+#include "rte_atomic.h"
+
+static inline void
+gve_completed_buf_list_init(struct gve_rx_queue *rxq)
+{
+	for (int i = 0; i < rxq->nb_rx_desc; i++)
+		rxq->completed_buf_list[i] = -1;
+
+	rxq->completed_buf_list_head = -1;
+}
+
+/* Assumes buf_id < nb_rx_desc */
+static inline void
+gve_completed_buf_list_push(struct gve_rx_queue *rxq, uint16_t buf_id)
+{
+	rxq->completed_buf_list[buf_id] = rxq->completed_buf_list_head;
+	rxq->completed_buf_list_head = buf_id;
+}
+
+static inline int16_t
+gve_completed_buf_list_pop(struct gve_rx_queue *rxq)
+{
+	int16_t head = rxq->completed_buf_list_head;
+	if (head != -1)
+		rxq->completed_buf_list_head = rxq->completed_buf_list[head];
+
+	return head;
+}
 
 static inline void
 gve_rx_refill_dqo(struct gve_rx_queue *rxq)
 {
 	volatile struct gve_rx_desc_dqo *rx_buf_desc;
-	struct rte_mbuf *nmb[rxq->nb_rx_hold];
-	uint16_t nb_refill = rxq->nb_rx_hold;
+	struct rte_mbuf *new_bufs[rxq->nb_rx_desc];
+	uint16_t rx_mask = rxq->nb_rx_desc - 1;
 	uint16_t next_avail = rxq->bufq_tail;
 	struct rte_eth_dev *dev;
+	uint16_t nb_refill;
 	uint64_t dma_addr;
+	int16_t buf_id;
+	int diag;
 	int i;
 
-	if (rxq->nb_rx_hold < rxq->free_thresh)
+	nb_refill = rxq->nb_rx_hold;
+	if (nb_refill < rxq->free_thresh)
 		return;
 
-	if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill))) {
+	diag = rte_pktmbuf_alloc_bulk(rxq->mpool, new_bufs, nb_refill);
+	if (unlikely(diag < 0)) {
 		rxq->stats.no_mbufs_bulk++;
 		rxq->stats.no_mbufs += nb_refill;
 		dev = &rte_eth_devices[rxq->port_id];
@@ -33,17 +66,31 @@ gve_rx_refill_dqo(struct gve_rx_queue *rxq)
 		return;
 	}
 
+	/* Mbuf allocation succeeded, so refill buffers. */
 	for (i = 0; i < nb_refill; i++) {
 		rx_buf_desc = &rxq->rx_ring[next_avail];
-		rxq->sw_ring[next_avail] = nmb[i];
-		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+		buf_id = gve_completed_buf_list_pop(rxq);
+
+		/* Out of buffers. Free remaining mbufs and return. */
+		if (unlikely(buf_id == -1)) {
+			PMD_DRV_DP_LOG(ERR,
+				       "No free entries in sw_ring for port %d, queue %d.",
+				       rxq->port_id, rxq->queue_id);
+			rte_pktmbuf_free_bulk(new_bufs + i, nb_refill - i);
+			nb_refill = i;
+			break;
+		}
+		rxq->sw_ring[buf_id] = new_bufs[i];
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_bufs[i]));
+		rx_buf_desc->buf_id = buf_id;
 		rx_buf_desc->header_buf_addr = 0;
 		rx_buf_desc->buf_addr = dma_addr;
-		next_avail = (next_avail + 1) & (rxq->nb_rx_desc - 1);
+
+		next_avail = (next_avail + 1) & rx_mask;
 	}
+
 	rxq->nb_rx_hold -= nb_refill;
 	rte_write32(next_avail, rxq->qrx_tail);
-
 	rxq->bufq_tail = next_avail;
 }
 
@@ -109,11 +156,10 @@ gve_rx_set_mbuf_ptype(struct gve_priv *priv, struct rte_mbuf *rx_mbuf,
 uint16_t
 gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 {
-	volatile struct gve_rx_compl_desc_dqo *rx_compl_ring;
 	volatile struct gve_rx_compl_desc_dqo *rx_desc;
 	struct gve_rx_queue *rxq;
 	struct rte_mbuf *rxm;
-	uint16_t rx_id_bufq;
+	uint16_t rx_buf_id;
 	uint16_t pkt_len;
 	uint16_t rx_id;
 	uint16_t nb_rx;
@@ -123,11 +169,9 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	nb_rx = 0;
 	rxq = rx_queue;
 	rx_id = rxq->rx_tail;
-	rx_id_bufq = rxq->next_avail;
-	rx_compl_ring = rxq->compl_ring;
 
 	while (nb_rx < nb_pkts) {
-		rx_desc = &rx_compl_ring[rx_id];
+		rx_desc = &rxq->compl_ring[rx_id];
 
 		/* check status */
 		if (rx_desc->generation != rxq->cur_gen_bit)
@@ -135,25 +179,25 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 		rte_io_rmb();
 
-		if (unlikely(rx_desc->rx_error)) {
-			rxq->stats.errors++;
-			continue;
-		}
-
-		pkt_len = rx_desc->packet_len;
-
+		rxq->nb_rx_hold++;
 		rx_id++;
 		if (rx_id == rxq->nb_rx_desc) {
 			rx_id = 0;
 			rxq->cur_gen_bit ^= 1;
 		}
 
-		rxm = rxq->sw_ring[rx_id_bufq];
-		rx_id_bufq++;
-		if (rx_id_bufq == rxq->nb_rx_desc)
-			rx_id_bufq = 0;
-		rxq->nb_rx_hold++;
+		rx_buf_id = rte_le_to_cpu_16(rx_desc->buf_id);
+		rxm = rxq->sw_ring[rx_buf_id];
+		gve_completed_buf_list_push(rxq, rx_buf_id);
 
+		/* Free buffer and report error. */
+		if (unlikely(rx_desc->rx_error)) {
+			rxq->stats.errors++;
+			rte_pktmbuf_free(rxm);
+			continue;
+		}
+
+		pkt_len = rte_le_to_cpu_16(rx_desc->packet_len);
 		rxm->pkt_len = pkt_len;
 		rxm->data_len = pkt_len;
 		rxm->port = rxq->port_id;
@@ -168,7 +212,6 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 	if (nb_rx > 0) {
 		rxq->rx_tail = rx_id;
-		rxq->next_avail = rx_id_bufq;
 
 		rxq->stats.packets += nb_rx;
 		rxq->stats.bytes += bytes;
@@ -203,6 +246,7 @@ gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid)
 
 	gve_release_rxq_mbufs_dqo(q);
 	rte_free(q->sw_ring);
+	rte_free(q->completed_buf_list);
 	rte_memzone_free(q->compl_ring_mz);
 	rte_memzone_free(q->mz);
 	rte_memzone_free(q->qres_mz);
@@ -211,7 +255,7 @@ gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid)
 }
 
 static void
-gve_reset_rxq_dqo(struct gve_rx_queue *rxq)
+gve_reset_rx_ring_state_dqo(struct gve_rx_queue *rxq)
 {
 	struct rte_mbuf **sw_ring;
 	uint32_t size, i;
@@ -233,8 +277,9 @@ gve_reset_rxq_dqo(struct gve_rx_queue *rxq)
 	for (i = 0; i < rxq->nb_rx_desc; i++)
 		sw_ring[i] = NULL;
 
+	gve_completed_buf_list_init(rxq);
+
 	rxq->bufq_tail = 0;
-	rxq->next_avail = 0;
 	rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
 
 	rxq->rx_tail = 0;
@@ -306,6 +351,16 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 		goto free_rxq;
 	}
 
+	/* Allocate completed bufs list */
+	rxq->completed_buf_list = rte_zmalloc_socket("gve completed buf list",
+		nb_desc * sizeof(*rxq->completed_buf_list), RTE_CACHE_LINE_SIZE,
+		socket_id);
+	if (rxq->completed_buf_list == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate completed buffer list.");
+		err = -ENOMEM;
+		goto free_rxq_sw_ring;
+	}
+
 	/* Allocate RX buffer queue */
 	mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_id,
 				      nb_desc * sizeof(struct gve_rx_desc_dqo),
@@ -313,7 +368,7 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	if (mz == NULL) {
 		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue");
 		err = -ENOMEM;
-		goto free_rxq_sw_ring;
+		goto free_rxq_completed_buf_list;
 	}
 	rxq->rx_ring = (struct gve_rx_desc_dqo *)mz->addr;
 	rxq->rx_ring_phys_addr = mz->iova;
@@ -345,7 +400,7 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rxq->qres = (struct gve_queue_resources *)mz->addr;
 	rxq->qres_mz = mz;
 
-	gve_reset_rxq_dqo(rxq);
+	gve_reset_rx_ring_state_dqo(rxq);
 
 	dev->data->rx_queues[queue_id] = rxq;
 
@@ -355,6 +410,8 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rte_memzone_free(rxq->compl_ring_mz);
 free_rxq_mz:
 	rte_memzone_free(rxq->mz);
+free_rxq_completed_buf_list:
+	rte_free(rxq->completed_buf_list);
 free_rxq_sw_ring:
 	rte_free(rxq->sw_ring);
 free_rxq:
@@ -440,7 +497,7 @@ gve_rx_queue_stop_dqo(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	rxq = dev->data->rx_queues[rx_queue_id];
 	gve_release_rxq_mbufs_dqo(rxq);
-	gve_reset_rxq_dqo(rxq);
+	gve_reset_rx_ring_state_dqo(rxq);
 
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
-- 
2.51.0.rc1.167.g924127e9c0-goog

