From: Joshua Washington <joshwash@google.com>
To: Jeroen de Borst <jeroendb@google.com>,
	Rushil Gupta <rushilg@google.com>,
	 Joshua Washington <joshwash@google.com>,
	Junfeng Guo <junfeng.guo@intel.com>
Cc: dev@dpdk.org, stable@dpdk.org,
	Ferruh Yigit <ferruh.yigit@amd.com>,
	 Praveen Kaligineedi <pkaligineedi@google.com>
Subject: [PATCH] net/gve: fix refill logic causing memory corruption
Date: Thu,  3 Oct 2024 18:05:18 -0700
Message-ID: <20241004010518.238331-1-joshwash@google.com>

There is a seemingly mundane error in the RX refill path which can lead
to major issues and ultimately a program crash.

This error occurs in an edge case where refilling exactly the number of
buffers remaining before wraparound causes the ring to wrap around
to 0. The current refill logic is split into two conditions: first,
when the number of buffers to refill is greater than the number of
buffers left in the ring before wraparound occurs; second, when the
opposite is true, and there are enough buffers before wraparound to
refill all buffers.

In this edge case, the first condition erroneously uses (<) to decide
whether to wrap around, when it should have used (<=). As a result, the
second condition runs instead and sets the tail pointer to an invalid
value (RING_SIZE). This causes a number of cascading failures; a
minimal sketch of the faulty index arithmetic follows the list below.

1. The first issue is rather mundane: rxq->bufq_tail == RING_SIZE at
   the end of the refill. This corrects itself on the next refill
   without any sort of memory leak or corruption;
2. The second failure is that the head pointer ends up overrunning the
   tail because the last buffer is refilled at sw_ring[RING_SIZE]
   instead of sw_ring[0]. This causes the driver to hand the
   application a stale mbuf, one that has potentially already been
   freed;
3. The third failure comes from the software ring being overrun.
   Because the sw_ring pointer is used directly to refill buffers,
   writing sw_ring[RING_SIZE] overflows the buffer. The overwritten
   data may be important, and this can cause the program to crash
   outright.
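
For reference, the faulty arithmetic can be reproduced outside the
driver with a minimal standalone sketch (not the driver code; RING_SIZE
and the index values are hypothetical), showing the tail landing on
RING_SIZE when the refill exactly reaches the end of the ring:

#include <stdio.h>

#define RING_SIZE 256	/* hypothetical ring size */

int main(void)
{
	unsigned int next_avail = 192;	/* current bufq tail */
	unsigned int nb_refill = 64;	/* buffers to refill */
	unsigned int delta = RING_SIZE - next_avail;	/* room before wrap */

	if (delta < nb_refill) {
		/* wraparound branch: never taken here because delta == nb_refill */
		next_avail = 0;
	} else if (RING_SIZE - next_avail >= nb_refill) {
		/* non-wrap branch: leaves the tail at RING_SIZE, an invalid index */
		next_avail += nb_refill;
	}

	printf("new tail = %u (valid range is 0..%u)\n",
	       next_avail, RING_SIZE - 1);
	return 0;
}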

This patch fixes the refill bug while greatly simplifying the logic so
that it is much less error-prone.
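
As a rough model of the simplified logic (again not the driver code;
the values are hypothetical, and a power-of-two ring size is assumed to
match the (next_avail + 1) & (nb_rx_desc - 1) arithmetic in the diff
below), the refill index now wraps cleanly instead of running past the
end of the ring:

#include <stdio.h>

#define NB_RX_DESC 256	/* hypothetical, power-of-two ring size */

int main(void)
{
	unsigned int next_avail = 192;	/* current bufq tail */
	unsigned int nb_refill = 64;	/* buffers to refill */
	unsigned int i;

	for (i = 0; i < nb_refill; i++) {
		/* the real loop fills rxq->rx_ring[next_avail] and
		 * rxq->sw_ring[next_avail] here before advancing
		 */
		next_avail = (next_avail + 1) & (NB_RX_DESC - 1);
	}

	/* the tail wraps to 0 instead of landing on NB_RX_DESC */
	printf("new tail = %u\n", next_avail);
	return 0;
}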

Fixes: 45da16b5b181 ("net/gve: support basic Rx data path for DQO")
Cc: junfeng.guo@intel.com
Cc: stable@dpdk.org

Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
---
 drivers/net/gve/gve_rx_dqo.c | 62 ++++++++++--------------------------
 1 file changed, 16 insertions(+), 46 deletions(-)

diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index e4084bc0dd..5371bab77d 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -11,66 +11,36 @@
 static inline void
 gve_rx_refill_dqo(struct gve_rx_queue *rxq)
 {
-	volatile struct gve_rx_desc_dqo *rx_buf_ring;
 	volatile struct gve_rx_desc_dqo *rx_buf_desc;
 	struct rte_mbuf *nmb[rxq->nb_rx_hold];
 	uint16_t nb_refill = rxq->nb_rx_hold;
-	uint16_t nb_desc = rxq->nb_rx_desc;
 	uint16_t next_avail = rxq->bufq_tail;
 	struct rte_eth_dev *dev;
 	uint64_t dma_addr;
-	uint16_t delta;
 	int i;
 
 	if (rxq->nb_rx_hold < rxq->free_thresh)
 		return;
 
-	rx_buf_ring = rxq->rx_ring;
-	delta = nb_desc - next_avail;
-	if (unlikely(delta < nb_refill)) {
-		if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, delta) == 0)) {
-			for (i = 0; i < delta; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rxq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->header_buf_addr = 0;
-				rx_buf_desc->buf_addr = dma_addr;
-			}
-			nb_refill -= delta;
-			next_avail = 0;
-			rxq->nb_rx_hold -= delta;
-		} else {
-			rxq->stats.no_mbufs_bulk++;
-			rxq->stats.no_mbufs += nb_desc - next_avail;
-			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				    rxq->port_id, rxq->queue_id);
-			return;
-		}
+	if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill))) {
+		rxq->stats.no_mbufs_bulk++;
+		rxq->stats.no_mbufs += nb_refill;
+		dev = &rte_eth_devices[rxq->port_id];
+		dev->data->rx_mbuf_alloc_failed += nb_refill;
+		PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+			    rxq->port_id, rxq->queue_id);
+		return;
 	}
 
-	if (nb_desc - next_avail >= nb_refill) {
-		if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill) == 0)) {
-			for (i = 0; i < nb_refill; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rxq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->header_buf_addr = 0;
-				rx_buf_desc->buf_addr = dma_addr;
-			}
-			next_avail += nb_refill;
-			rxq->nb_rx_hold -= nb_refill;
-		} else {
-			rxq->stats.no_mbufs_bulk++;
-			rxq->stats.no_mbufs += nb_desc - next_avail;
-			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				    rxq->port_id, rxq->queue_id);
-		}
+	for (i = 0; i < nb_refill; i++) {
+		rx_buf_desc = &rxq->rx_ring[next_avail];
+		rxq->sw_ring[next_avail] = nmb[i];
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+		rx_buf_desc->header_buf_addr = 0;
+		rx_buf_desc->buf_addr = dma_addr;
+		next_avail = (next_avail + 1) & (rxq->nb_rx_desc - 1);
 	}
-
+	rxq->nb_rx_hold -= nb_refill;
 	rte_write32(next_avail, rxq->qrx_tail);
 
 	rxq->bufq_tail = next_avail;
-- 
2.47.0.rc0.187.ge670bccf7e-goog

