DPDK patches and discussions
* [PATCH v2 0/4] vhost: fix and improve dequeue error path
@ 2025-01-15 12:59 Maxime Coquelin
  2025-01-15 12:59 ` [PATCH v2 1/4] vhost: fix missing packets count reset when not ready Maxime Coquelin
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Maxime Coquelin @ 2025-01-15 12:59 UTC (permalink / raw)
  To: dev, david.marchand, chenbox; +Cc: Maxime Coquelin

This series starts with a fix for a regression in the Vhost
dequeue error path.

The other patches improve the error handling to reduce the
chance of such regressions in the future.

Changes in v2:
==============
- Add RARP handling refactoring

Maxime Coquelin (4):
  vhost: fix missing packets count reset when not ready
  vhost: rework dequeue path error handling
  vhost: rework async dequeue path error handling
  vhost: improve RARP handling in dequeue paths

 lib/vhost/virtio_net.c | 80 ++++++++++++++++++------------------------
 1 file changed, 35 insertions(+), 45 deletions(-)

-- 
2.47.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v2 1/4] vhost: fix missing packets count reset when not ready
  2025-01-15 12:59 [PATCH v2 0/4] vhost: fix and improve dequeue error path Maxime Coquelin
@ 2025-01-15 12:59 ` Maxime Coquelin
  2025-01-15 16:41   ` David Marchand
  2025-01-15 12:59 ` [PATCH v2 2/4] vhost: rework dequeue path error handling Maxime Coquelin
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Maxime Coquelin @ 2025-01-15 12:59 UTC (permalink / raw)
  To: dev, david.marchand, chenbox; +Cc: Maxime Coquelin, stable

This patch fixes the rte_vhost_dequeue_burst return value
when the virtqueue is not ready. Without this fix, the
caller of the API sees a discrepancy between the packet
array and the returned count.

Fixes: 9fc93a1e2320 ("vhost: fix virtqueue access check in datapath")
Cc: stable@dpdk.org

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/virtio_net.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 69901ab3b5..a340e5a772 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3629,6 +3629,8 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		rte_rwlock_read_unlock(&vq->access_lock);
 
 		virtio_dev_vring_translate(dev, vq);
+
+		count = 0;
 		goto out_no_unlock;
 	}
 
-- 
2.47.1


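To make the regression concrete, here is a minimal standalone sketch (hypothetical names, not the actual vhost code) contrasting the buggy and fixed early-exit behaviour. The caller trusts the returned count to index into pkts[], so returning the requested count without writing any packets makes the caller read uninitialized pointers:

```c
#include <stdint.h>

struct mbuf { int len; };

/* Hypothetical model of the buggy early-exit path: the virtqueue is
 * not ready, so nothing is written into pkts[], yet the requested
 * count is returned unchanged. The caller would then read 'count'
 * uninitialized pointers out of pkts[]. */
static uint16_t dequeue_not_ready_buggy(struct mbuf **pkts, uint16_t count)
{
	(void)pkts;
	return count;
}

/* Fixed path (this patch): count is reset to 0 before taking the
 * early-exit goto, so the return value matches the empty array. */
static uint16_t dequeue_not_ready_fixed(struct mbuf **pkts, uint16_t count)
{
	(void)pkts;
	(void)count;
	return 0;
}
```

With the fix, a caller loop such as `for (i = 0; i < n; i++) process(pkts[i]);` simply never executes its body on the not-ready path.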

* [PATCH v2 2/4] vhost: rework dequeue path error handling
  2025-01-15 12:59 [PATCH v2 0/4] vhost: fix and improve dequeue error path Maxime Coquelin
  2025-01-15 12:59 ` [PATCH v2 1/4] vhost: fix missing packets count reset when not ready Maxime Coquelin
@ 2025-01-15 12:59 ` Maxime Coquelin
  2025-01-15 16:42   ` David Marchand
  2025-01-15 12:59 ` [PATCH v2 3/4] vhost: rework async " Maxime Coquelin
  2025-01-15 12:59 ` [PATCH v2 4/4] vhost: improve RARP handling in dequeue paths Maxime Coquelin
  3 siblings, 1 reply; 10+ messages in thread
From: Maxime Coquelin @ 2025-01-15 12:59 UTC (permalink / raw)
  To: dev, david.marchand, chenbox; +Cc: Maxime Coquelin

This patch refactors the error handling in the Vhost
dequeue path to improve its maintainability and readability.

Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/virtio_net.c | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index a340e5a772..3a4955fd30 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3593,6 +3593,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mbuf *rarp_mbuf = NULL;
 	struct vhost_virtqueue *vq;
 	int16_t success = 1;
+	uint16_t nb_rx = 0;
 
 	dev = get_device(vid);
 	if (!dev)
@@ -3602,25 +3603,23 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		VHOST_DATA_LOG(dev->ifname, ERR,
 			"%s: built-in vhost net backend is disabled.",
 			__func__);
-		return 0;
+		goto out_no_unlock;
 	}
 
 	if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
 		VHOST_DATA_LOG(dev->ifname, ERR,
 			"%s: invalid virtqueue idx %d.",
 			__func__, queue_id);
-		return 0;
+		goto out_no_unlock;
 	}
 
 	vq = dev->virtqueue[queue_id];
 
 	if (unlikely(rte_rwlock_read_trylock(&vq->access_lock) != 0))
-		return 0;
+		goto out_no_unlock;
 
-	if (unlikely(!vq->enabled)) {
-		count = 0;
+	if (unlikely(!vq->enabled))
 		goto out_access_unlock;
-	}
 
 	vhost_user_iotlb_rd_lock(vq);
 
@@ -3630,7 +3629,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 		virtio_dev_vring_translate(dev, vq);
 
-		count = 0;
 		goto out_no_unlock;
 	}
 
@@ -3657,7 +3655,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
 		if (rarp_mbuf == NULL) {
 			VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet.");
-			count = 0;
 			goto out;
 		}
 		/*
@@ -3672,17 +3669,17 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 	if (vq_is_packed(dev)) {
 		if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
-			count = virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool, pkts, count);
+			nb_rx = virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool, pkts, count);
 		else
-			count = virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool, pkts, count);
+			nb_rx = virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool, pkts, count);
 	} else {
 		if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
-			count = virtio_dev_tx_split_legacy(dev, vq, mbuf_pool, pkts, count);
+			nb_rx = virtio_dev_tx_split_legacy(dev, vq, mbuf_pool, pkts, count);
 		else
-			count = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
+			nb_rx = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
 	}
 
-	vhost_queue_stats_update(dev, vq, pkts, count);
+	vhost_queue_stats_update(dev, vq, pkts, nb_rx);
 
 out:
 	vhost_user_iotlb_rd_unlock(vq);
@@ -3691,10 +3688,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	rte_rwlock_read_unlock(&vq->access_lock);
 
 	if (unlikely(rarp_mbuf != NULL))
-		count += 1;
+		nb_rx += 1;
 
 out_no_unlock:
-	return count;
+	return nb_rx;
 }
 
 static __rte_always_inline uint16_t
-- 
2.47.1


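The refactoring pattern in this patch can be sketched in isolation (hypothetical helper names, not the vhost API): the return value starts at 0, every failure funnels through a goto label, and the labels are ordered by how much acquired state they must undo:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the vhost locking/validation helpers. */
static bool try_lock(void)       { return true; }
static void unlock(void)         { }
static bool vq_enabled(void)     { return true; }
static uint16_t do_dequeue(uint16_t count) { return count; }

/* Single-exit error handling: nb_rx starts at 0, so every early goto
 * returns a value consistent with an empty packet array, and each
 * label releases exactly the state acquired before it. */
static uint16_t dequeue(uint16_t count, bool valid_queue)
{
	uint16_t nb_rx = 0;

	if (!valid_queue)
		goto out_no_unlock;	/* nothing acquired yet */

	if (!try_lock())
		goto out_no_unlock;

	if (!vq_enabled())
		goto out_access_unlock;	/* lock held: must release it */

	nb_rx = do_dequeue(count);

out_access_unlock:
	unlock();
out_no_unlock:
	return nb_rx;
}
```

This removes the scattered `count = 0;` assignments: the zero return is established once, at declaration, instead of at every error site.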

* [PATCH v2 3/4] vhost: rework async dequeue path error handling
  2025-01-15 12:59 [PATCH v2 0/4] vhost: fix and improve dequeue error path Maxime Coquelin
  2025-01-15 12:59 ` [PATCH v2 1/4] vhost: fix missing packets count reset when not ready Maxime Coquelin
  2025-01-15 12:59 ` [PATCH v2 2/4] vhost: rework dequeue path error handling Maxime Coquelin
@ 2025-01-15 12:59 ` Maxime Coquelin
  2025-01-15 16:42   ` David Marchand
  2025-01-15 12:59 ` [PATCH v2 4/4] vhost: improve RARP handling in dequeue paths Maxime Coquelin
  3 siblings, 1 reply; 10+ messages in thread
From: Maxime Coquelin @ 2025-01-15 12:59 UTC (permalink / raw)
  To: dev, david.marchand, chenbox; +Cc: Maxime Coquelin

This patch refactors the error handling in the Vhost async
dequeue path to improve its maintainability and readability.

Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/virtio_net.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 3a4955fd30..59ea2d16a5 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -4197,52 +4197,51 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mbuf *rarp_mbuf = NULL;
 	struct vhost_virtqueue *vq;
 	int16_t success = 1;
+	uint16_t nb_rx = 0;
 
 	dev = get_device(vid);
 	if (!dev || !nr_inflight)
-		return 0;
+		goto out_no_unlock;
 
 	*nr_inflight = -1;
 
 	if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) {
 		VHOST_DATA_LOG(dev->ifname, ERR, "%s: built-in vhost net backend is disabled.",
 			__func__);
-		return 0;
+		goto out_no_unlock;
 	}
 
 	if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
 		VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %d.",
 			__func__, queue_id);
-		return 0;
+		goto out_no_unlock;
 	}
 
 	if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) {
 		VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.",
 			__func__, dma_id);
-		return 0;
+		goto out_no_unlock;
 	}
 
 	if (unlikely(!dma_copy_track[dma_id].vchans ||
 				!dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) {
 		VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid channel %d:%u.",
 			__func__, dma_id, vchan_id);
-		return 0;
+		goto out_no_unlock;
 	}
 
 	vq = dev->virtqueue[queue_id];
 
 	if (unlikely(rte_rwlock_read_trylock(&vq->access_lock) != 0))
-		return 0;
+		goto out_no_unlock;
 
 	if (unlikely(vq->enabled == 0)) {
-		count = 0;
 		goto out_access_unlock;
 	}
 
 	if (unlikely(!vq->async)) {
 		VHOST_DATA_LOG(dev->ifname, ERR, "%s: async not registered for queue id %d.",
 			__func__, queue_id);
-		count = 0;
 		goto out_access_unlock;
 	}
 
@@ -4253,7 +4252,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 		rte_rwlock_read_unlock(&vq->access_lock);
 
 		virtio_dev_vring_translate(dev, vq);
-		count = 0;
 		goto out_no_unlock;
 	}
 
@@ -4280,7 +4278,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 		rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
 		if (rarp_mbuf == NULL) {
 			VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet.");
-			count = 0;
 			goto out;
 		}
 		/*
@@ -4295,22 +4292,22 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 
 	if (vq_is_packed(dev)) {
 		if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
-			count = virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
+			nb_rx = virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
 					pkts, count, dma_id, vchan_id);
 		else
-			count = virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
+			nb_rx = virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
 					pkts, count, dma_id, vchan_id);
 	} else {
 		if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
-			count = virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
+			nb_rx = virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
 					pkts, count, dma_id, vchan_id);
 		else
-			count = virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
+			nb_rx = virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
 					pkts, count, dma_id, vchan_id);
 	}
 
 	*nr_inflight = vq->async->pkts_inflight_n;
-	vhost_queue_stats_update(dev, vq, pkts, count);
+	vhost_queue_stats_update(dev, vq, pkts, nb_rx);
 
 out:
 	vhost_user_iotlb_rd_unlock(vq);
@@ -4319,8 +4316,8 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	rte_rwlock_read_unlock(&vq->access_lock);
 
 	if (unlikely(rarp_mbuf != NULL))
-		count += 1;
+		nb_rx += 1;
 
 out_no_unlock:
-	return count;
+	return nb_rx;
 }
-- 
2.47.1



* [PATCH v2 4/4] vhost: improve RARP handling in dequeue paths
  2025-01-15 12:59 [PATCH v2 0/4] vhost: fix and improve dequeue error path Maxime Coquelin
                   ` (2 preceding siblings ...)
  2025-01-15 12:59 ` [PATCH v2 3/4] vhost: rework async " Maxime Coquelin
@ 2025-01-15 12:59 ` Maxime Coquelin
  2025-01-15 16:46   ` David Marchand
  3 siblings, 1 reply; 10+ messages in thread
From: Maxime Coquelin @ 2025-01-15 12:59 UTC (permalink / raw)
  To: dev, david.marchand, chenbox; +Cc: Maxime Coquelin

Thanks to the previous refactoring, we can now simplify
the RARP packet injection handling in both the sync and
async dequeue paths.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/virtio_net.c | 42 ++++++++++++++++++------------------------
 1 file changed, 18 insertions(+), 24 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 59ea2d16a5..fab45ebd54 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3662,21 +3662,23 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		 * learning table will get updated first.
 		 */
 		pkts[0] = rarp_mbuf;
-		vhost_queue_stats_update(dev, vq, pkts, 1);
-		pkts++;
-		count -= 1;
+		nb_rx += 1;
 	}
 
 	if (vq_is_packed(dev)) {
 		if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
-			nb_rx = virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool, pkts, count);
+			nb_rx += virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool,
+					pkts + nb_rx, count - nb_rx);
 		else
-			nb_rx = virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool, pkts, count);
+			nb_rx += virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool,
+					pkts + nb_rx, count - nb_rx);
 	} else {
 		if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
-			nb_rx = virtio_dev_tx_split_legacy(dev, vq, mbuf_pool, pkts, count);
+			nb_rx += virtio_dev_tx_split_legacy(dev, vq, mbuf_pool,
+					pkts + nb_rx, count - nb_rx);
 		else
-			nb_rx = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
+			nb_rx += virtio_dev_tx_split_compliant(dev, vq, mbuf_pool,
+					pkts + nb_rx, count - nb_rx);
 	}
 
 	vhost_queue_stats_update(dev, vq, pkts, nb_rx);
@@ -3687,9 +3689,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 out_access_unlock:
 	rte_rwlock_read_unlock(&vq->access_lock);
 
-	if (unlikely(rarp_mbuf != NULL))
-		nb_rx += 1;
-
 out_no_unlock:
 	return nb_rx;
 }
@@ -4285,25 +4284,23 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 		 * learning table will get updated first.
 		 */
 		pkts[0] = rarp_mbuf;
-		vhost_queue_stats_update(dev, vq, pkts, 1);
-		pkts++;
-		count -= 1;
+		nb_rx += 1;
 	}
 
 	if (vq_is_packed(dev)) {
 		if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
-			nb_rx = virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
-					pkts, count, dma_id, vchan_id);
+			nb_rx += virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
+					pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
 		else
-			nb_rx = virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
-					pkts, count, dma_id, vchan_id);
+			nb_rx += virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
+					pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
 	} else {
 		if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
-			nb_rx = virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
-					pkts, count, dma_id, vchan_id);
+			nb_rx += virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
+					pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
 		else
-			nb_rx = virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
-					pkts, count, dma_id, vchan_id);
+			nb_rx += virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
+					pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
 	}
 
 	*nr_inflight = vq->async->pkts_inflight_n;
@@ -4315,9 +4312,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 out_access_unlock:
 	rte_rwlock_read_unlock(&vq->access_lock);
 
-	if (unlikely(rarp_mbuf != NULL))
-		nb_rx += 1;
-
 out_no_unlock:
 	return nb_rx;
 }
-- 
2.47.1


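The offset arithmetic introduced by this patch can be sketched standalone (hypothetical names): the injected RARP packet is counted in nb_rx up front, the backend dequeues into the remaining slots, and the final nb_rx already covers both, so the post-dequeue `nb_rx += 1` fixup disappears:

```c
#include <stddef.h>
#include <stdint.h>

struct mbuf { int is_rarp; };

/* Hypothetical dequeue backend: fills up to 'count' slots from a
 * pool holding 'avail' packets, returns how many it produced. */
static uint16_t backend_dequeue(struct mbuf **pkts, uint16_t count,
				struct mbuf *pool, uint16_t avail)
{
	uint16_t n = avail < count ? avail : count;
	for (uint16_t i = 0; i < n; i++)
		pkts[i] = &pool[i];
	return n;
}

/* Pattern from this patch: account for the injected RARP packet in
 * nb_rx first, then dequeue into pkts + nb_rx with count - nb_rx
 * remaining slots. */
static uint16_t dequeue_with_rarp(struct mbuf **pkts, uint16_t count,
				  struct mbuf *rarp,
				  struct mbuf *pool, uint16_t avail)
{
	uint16_t nb_rx = 0;

	if (rarp != NULL) {
		pkts[0] = rarp;	/* RARP must be first in the array */
		nb_rx += 1;
	}

	nb_rx += backend_dequeue(pkts + nb_rx, count - nb_rx, pool, avail);
	return nb_rx;
}
```

With an 8-slot array, one RARP packet, and 4 packets available, the function returns 5, with the RARP packet at index 0.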

* Re: [PATCH v2 1/4] vhost: fix missing packets count reset when not ready
  2025-01-15 12:59 ` [PATCH v2 1/4] vhost: fix missing packets count reset when not ready Maxime Coquelin
@ 2025-01-15 16:41   ` David Marchand
  0 siblings, 0 replies; 10+ messages in thread
From: David Marchand @ 2025-01-15 16:41 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: dev, chenbox, stable

On Wed, Jan 15, 2025 at 1:59 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch fixes the rte_vhost_dequeue_burst return value
> when the virtqueue is not ready. Without it, a discrepancy
> between the packet array and its size is faced by the caller
> of this API when the virtqueue is not ready.
>
> Fixes: 9fc93a1e2320 ("vhost: fix virtqueue access check in datapath")
> Cc: stable@dpdk.org
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Good catch!
I am surprised no one caught the issue earlier.

Reviewed-by: David Marchand <david.marchand@redhat.com>


-- 
David Marchand



* Re: [PATCH v2 2/4] vhost: rework dequeue path error handling
  2025-01-15 12:59 ` [PATCH v2 2/4] vhost: rework dequeue path error handling Maxime Coquelin
@ 2025-01-15 16:42   ` David Marchand
  0 siblings, 0 replies; 10+ messages in thread
From: David Marchand @ 2025-01-15 16:42 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: dev, chenbox

On Wed, Jan 15, 2025 at 1:59 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch refactors the error handling in the Vhost
> dequeue path to ease its maintenance and readability.
>
> Suggested-by: David Marchand <david.marchand@redhat.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Reviewed-by: David Marchand <david.marchand@redhat.com>


-- 
David Marchand



* Re: [PATCH v2 3/4] vhost: rework async dequeue path error handling
  2025-01-15 12:59 ` [PATCH v2 3/4] vhost: rework async " Maxime Coquelin
@ 2025-01-15 16:42   ` David Marchand
  2025-01-15 16:49     ` David Marchand
  0 siblings, 1 reply; 10+ messages in thread
From: David Marchand @ 2025-01-15 16:42 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: dev, chenbox

On Wed, Jan 15, 2025 at 1:59 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch refactors the error handling in the Vhost async
> dequeue path to ease its maintenance and readability.
>
> Suggested-by: David Marchand <david.marchand@redhat.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Reviewed-by: David Marchand <david.marchand@redhat.com>


-- 
David Marchand



* Re: [PATCH v2 4/4] vhost: improve RARP handling in dequeue paths
  2025-01-15 12:59 ` [PATCH v2 4/4] vhost: improve RARP handling in dequeue paths Maxime Coquelin
@ 2025-01-15 16:46   ` David Marchand
  0 siblings, 0 replies; 10+ messages in thread
From: David Marchand @ 2025-01-15 16:46 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: dev, chenbox

On Wed, Jan 15, 2025 at 1:59 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> With previous refactoring, we can now simplify the RARP
> packet injection handling in both the sync and async
> dequeue paths.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  lib/vhost/virtio_net.c | 42 ++++++++++++++++++------------------------
>  1 file changed, 18 insertions(+), 24 deletions(-)
>
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 59ea2d16a5..fab45ebd54 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -3662,21 +3662,23 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
>                  * learning table will get updated first.
>                  */
>                 pkts[0] = rarp_mbuf;

Well, ideally it would be pkts[nb_rx], but see comment below.

> -               vhost_queue_stats_update(dev, vq, pkts, 1);
> -               pkts++;
> -               count -= 1;
> +               nb_rx += 1;
>         }

With this change, the rarp_mbuf variable is unneeded.
You can store to pkts[nb_rx] when calling rte_net_make_rarp_packet()
(and at the same time, move the comment about injecting the packet to
the head of the array).



>
>         if (vq_is_packed(dev)) {
>                 if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
> -                       nb_rx = virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool, pkts, count);
> +                       nb_rx += virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool,
> +                                       pkts + nb_rx, count - nb_rx);
>                 else
> -                       nb_rx = virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool, pkts, count);
> +                       nb_rx += virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool,
> +                                       pkts + nb_rx, count - nb_rx);
>         } else {
>                 if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
> -                       nb_rx = virtio_dev_tx_split_legacy(dev, vq, mbuf_pool, pkts, count);
> +                       nb_rx += virtio_dev_tx_split_legacy(dev, vq, mbuf_pool,
> +                                       pkts + nb_rx, count - nb_rx);
>                 else
> -                       nb_rx = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
> +                       nb_rx += virtio_dev_tx_split_compliant(dev, vq, mbuf_pool,
> +                                       pkts + nb_rx, count - nb_rx);
>         }
>
>         vhost_queue_stats_update(dev, vq, pkts, nb_rx);
> @@ -3687,9 +3689,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
>  out_access_unlock:
>         rte_rwlock_read_unlock(&vq->access_lock);
>
> -       if (unlikely(rarp_mbuf != NULL))
> -               nb_rx += 1;
> -
>  out_no_unlock:
>         return nb_rx;
>  }
> @@ -4285,25 +4284,23 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
>                  * learning table will get updated first.
>                  */
>                 pkts[0] = rarp_mbuf;
> -               vhost_queue_stats_update(dev, vq, pkts, 1);
> -               pkts++;
> -               count -= 1;
> +               nb_rx += 1;
>         }

Idem.

>
>         if (vq_is_packed(dev)) {
>                 if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
> -                       nb_rx = virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
> -                                       pkts, count, dma_id, vchan_id);
> +                       nb_rx += virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
> +                                       pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
>                 else
> -                       nb_rx = virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
> -                                       pkts, count, dma_id, vchan_id);
> +                       nb_rx += virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
> +                                       pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
>         } else {
>                 if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
> -                       nb_rx = virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
> -                                       pkts, count, dma_id, vchan_id);
> +                       nb_rx += virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
> +                                       pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
>                 else
> -                       nb_rx = virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
> -                                       pkts, count, dma_id, vchan_id);
> +                       nb_rx += virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
> +                                       pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
>         }
>
>         *nr_inflight = vq->async->pkts_inflight_n;
> @@ -4315,9 +4312,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
>  out_access_unlock:
>         rte_rwlock_read_unlock(&vq->access_lock);
>
> -       if (unlikely(rarp_mbuf != NULL))
> -               nb_rx += 1;
> -
>  out_no_unlock:
>         return nb_rx;
>  }
> --
> 2.47.1
>


-- 
David Marchand


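David's suggestion above — drop the intermediate rarp_mbuf variable and store the RARP packet straight into pkts[nb_rx] — could look roughly like this (hypothetical helper standing in for rte_net_make_rarp_packet(), not a definitive implementation):

```c
#include <stddef.h>
#include <stdint.h>

struct mbuf { int is_rarp; };

static struct mbuf rarp_storage = { 1 };

/* Hypothetical stand-in for rte_net_make_rarp_packet(); returns NULL
 * on allocation failure. */
static struct mbuf *make_rarp(int fail)
{
	return fail ? NULL : &rarp_storage;
}

/* Suggested shape: no separate rarp_mbuf variable. The packet is
 * written directly at pkts[nb_rx] and nb_rx is bumped on success,
 * so no later fixup of the return value is needed.
 *
 * Inject the RARP packet at the head of the array so the switch
 * learning table gets updated first. */
static int inject_rarp(struct mbuf **pkts, uint16_t *nb_rx, int fail)
{
	pkts[*nb_rx] = make_rarp(fail);
	if (pkts[*nb_rx] == NULL)
		return -1;
	*nb_rx += 1;
	return 0;
}
```

On allocation failure nb_rx is left untouched, matching the existing `goto out` error path.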

* Re: [PATCH v2 3/4] vhost: rework async dequeue path error handling
  2025-01-15 16:42   ` David Marchand
@ 2025-01-15 16:49     ` David Marchand
  0 siblings, 0 replies; 10+ messages in thread
From: David Marchand @ 2025-01-15 16:49 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: dev, chenbox

On Wed, Jan 15, 2025 at 5:42 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> On Wed, Jan 15, 2025 at 1:59 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
> >
> > This patch refactors the error handling in the Vhost async
> > dequeue path to ease its maintenance and readability.
> >
> > Suggested-by: David Marchand <david.marchand@redhat.com>
> > Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>
> Reviewed-by: David Marchand <david.marchand@redhat.com>

Btw, I would squash patch 2 and 3 (especially as you updated both sync
and async code at the same time in patch 4).


-- 
David Marchand


