* [PATCH v3 0/3] vhost: fix and improve dequeue error path
@ 2025-01-16 9:54 Maxime Coquelin
2025-01-16 9:54 ` [PATCH v3 1/3] vhost: fix missing packets count reset when not ready Maxime Coquelin
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Maxime Coquelin @ 2025-01-16 9:54 UTC (permalink / raw)
To: dev, david.marchand, chenbox; +Cc: Maxime Coquelin
This series starts with a fix for a regression in the Vhost
dequeue error path.
The other patches improve the error handling to reduce the
chance of such regressions in the future.
Changes in v3:
==============
- Squashed patches 2 & 3 (David)
- Simplify RARP handling further (David)
Changes in v2:
==============
- Add RARP handling refactoring
Maxime Coquelin (3):
vhost: fix missing packets count reset when not ready
vhost: rework dequeue paths error handling
vhost: improve RARP handling in dequeue paths
lib/vhost/virtio_net.c | 110 ++++++++++++++++++-----------------------
1 file changed, 47 insertions(+), 63 deletions(-)
--
2.47.1
* [PATCH v3 1/3] vhost: fix missing packets count reset when not ready
2025-01-16 9:54 [PATCH v3 0/3] vhost: fix and improve dequeue error path Maxime Coquelin
@ 2025-01-16 9:54 ` Maxime Coquelin
2025-01-16 10:10 ` David Marchand
2025-01-16 12:14 ` Chenbo Xia
2025-01-16 9:54 ` [PATCH v3 2/3] vhost: rework dequeue paths error handling Maxime Coquelin
2025-01-16 9:54 ` [PATCH v3 3/3] vhost: improve RARP handling in dequeue paths Maxime Coquelin
2 siblings, 2 replies; 11+ messages in thread
From: Maxime Coquelin @ 2025-01-16 9:54 UTC (permalink / raw)
To: dev, david.marchand, chenbox; +Cc: Maxime Coquelin, stable
This patch fixes the rte_vhost_dequeue_burst() return value
when the virtqueue is not ready. Without it, the caller of
this API gets a return value that does not match the number
of packets actually written into the array.
Fixes: 9fc93a1e2320 ("vhost: fix virtqueue access check in datapath")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/virtio_net.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 69901ab3b5..a340e5a772 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3629,6 +3629,8 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
rte_rwlock_read_unlock(&vq->access_lock);
virtio_dev_vring_translate(dev, vq);
+
+ count = 0;
goto out_no_unlock;
}
--
2.47.1
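
For context on why this one-line fix matters: the return value of
rte_vhost_dequeue_burst() is the caller's only indication of how many
entries of the pkts[] array hold valid mbuf pointers. A minimal
caller-side sketch in C (hypothetical application code, not part of
this series; poll_vhost_queue and BURST_SZ are illustrative names)
shows the failure mode:

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_vhost.h>

#define BURST_SZ 32

/* Hypothetical caller; illustrative only, not from this series. */
static void
poll_vhost_queue(int vid, uint16_t queue_id, struct rte_mempool *mbuf_pool)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb_rx, i;

	/*
	 * Contract: the return value is the number of valid entries
	 * written into pkts[]. Before this fix, the "vring not yet
	 * translated" path could return the caller-supplied count
	 * (BURST_SZ here) while writing no mbuf pointers at all.
	 */
	nb_rx = rte_vhost_dequeue_burst(vid, queue_id, mbuf_pool,
			pkts, BURST_SZ);

	for (i = 0; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);	/* garbage if count was stale */
}

Before the fix, the loop above would dereference and free BURST_SZ
uninitialized pointers whenever the not-ready path was taken.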
* [PATCH v3 2/3] vhost: rework dequeue paths error handling
2025-01-16 9:54 [PATCH v3 0/3] vhost: fix and improve dequeue error path Maxime Coquelin
2025-01-16 9:54 ` [PATCH v3 1/3] vhost: fix missing packets count reset when not ready Maxime Coquelin
@ 2025-01-16 9:54 ` Maxime Coquelin
2025-01-16 10:10 ` David Marchand
2025-01-16 12:15 ` Chenbo Xia
2025-01-16 9:54 ` [PATCH v3 3/3] vhost: improve RARP handling in dequeue paths Maxime Coquelin
2 siblings, 2 replies; 11+ messages in thread
From: Maxime Coquelin @ 2025-01-16 9:54 UTC (permalink / raw)
To: dev, david.marchand, chenbox; +Cc: Maxime Coquelin
This patch refactors the error handling in the Vhost
dequeue paths to improve their maintainability and readability.
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/virtio_net.c | 58 +++++++++++++++++++-----------------------
1 file changed, 26 insertions(+), 32 deletions(-)
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index a340e5a772..59ea2d16a5 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3593,6 +3593,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
struct rte_mbuf *rarp_mbuf = NULL;
struct vhost_virtqueue *vq;
int16_t success = 1;
+ uint16_t nb_rx = 0;
dev = get_device(vid);
if (!dev)
@@ -3602,25 +3603,23 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
VHOST_DATA_LOG(dev->ifname, ERR,
"%s: built-in vhost net backend is disabled.",
__func__);
- return 0;
+ goto out_no_unlock;
}
if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
VHOST_DATA_LOG(dev->ifname, ERR,
"%s: invalid virtqueue idx %d.",
__func__, queue_id);
- return 0;
+ goto out_no_unlock;
}
vq = dev->virtqueue[queue_id];
if (unlikely(rte_rwlock_read_trylock(&vq->access_lock) != 0))
- return 0;
+ goto out_no_unlock;
- if (unlikely(!vq->enabled)) {
- count = 0;
+ if (unlikely(!vq->enabled))
goto out_access_unlock;
- }
vhost_user_iotlb_rd_lock(vq);
@@ -3630,7 +3629,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
virtio_dev_vring_translate(dev, vq);
- count = 0;
goto out_no_unlock;
}
@@ -3657,7 +3655,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
if (rarp_mbuf == NULL) {
VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet.");
- count = 0;
goto out;
}
/*
@@ -3672,17 +3669,17 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
if (vq_is_packed(dev)) {
if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
- count = virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool, pkts, count);
+ nb_rx = virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool, pkts, count);
else
- count = virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool, pkts, count);
+ nb_rx = virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool, pkts, count);
} else {
if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
- count = virtio_dev_tx_split_legacy(dev, vq, mbuf_pool, pkts, count);
+ nb_rx = virtio_dev_tx_split_legacy(dev, vq, mbuf_pool, pkts, count);
else
- count = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
+ nb_rx = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
}
- vhost_queue_stats_update(dev, vq, pkts, count);
+ vhost_queue_stats_update(dev, vq, pkts, nb_rx);
out:
vhost_user_iotlb_rd_unlock(vq);
@@ -3691,10 +3688,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
rte_rwlock_read_unlock(&vq->access_lock);
if (unlikely(rarp_mbuf != NULL))
- count += 1;
+ nb_rx += 1;
out_no_unlock:
- return count;
+ return nb_rx;
}
static __rte_always_inline uint16_t
@@ -4200,52 +4197,51 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
struct rte_mbuf *rarp_mbuf = NULL;
struct vhost_virtqueue *vq;
int16_t success = 1;
+ uint16_t nb_rx = 0;
dev = get_device(vid);
if (!dev || !nr_inflight)
- return 0;
+ goto out_no_unlock;
*nr_inflight = -1;
if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) {
VHOST_DATA_LOG(dev->ifname, ERR, "%s: built-in vhost net backend is disabled.",
__func__);
- return 0;
+ goto out_no_unlock;
}
if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %d.",
__func__, queue_id);
- return 0;
+ goto out_no_unlock;
}
if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) {
VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.",
__func__, dma_id);
- return 0;
+ goto out_no_unlock;
}
if (unlikely(!dma_copy_track[dma_id].vchans ||
!dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) {
VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid channel %d:%u.",
__func__, dma_id, vchan_id);
- return 0;
+ goto out_no_unlock;
}
vq = dev->virtqueue[queue_id];
if (unlikely(rte_rwlock_read_trylock(&vq->access_lock) != 0))
- return 0;
+ goto out_no_unlock;
if (unlikely(vq->enabled == 0)) {
- count = 0;
goto out_access_unlock;
}
if (unlikely(!vq->async)) {
VHOST_DATA_LOG(dev->ifname, ERR, "%s: async not registered for queue id %d.",
__func__, queue_id);
- count = 0;
goto out_access_unlock;
}
@@ -4256,7 +4252,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
rte_rwlock_read_unlock(&vq->access_lock);
virtio_dev_vring_translate(dev, vq);
- count = 0;
goto out_no_unlock;
}
@@ -4283,7 +4278,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
if (rarp_mbuf == NULL) {
VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet.");
- count = 0;
goto out;
}
/*
@@ -4298,22 +4292,22 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
if (vq_is_packed(dev)) {
if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
- count = virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
+ nb_rx = virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
pkts, count, dma_id, vchan_id);
else
- count = virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
+ nb_rx = virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
pkts, count, dma_id, vchan_id);
} else {
if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
- count = virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
+ nb_rx = virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
pkts, count, dma_id, vchan_id);
else
- count = virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
+ nb_rx = virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
pkts, count, dma_id, vchan_id);
}
*nr_inflight = vq->async->pkts_inflight_n;
- vhost_queue_stats_update(dev, vq, pkts, count);
+ vhost_queue_stats_update(dev, vq, pkts, nb_rx);
out:
vhost_user_iotlb_rd_unlock(vq);
@@ -4322,8 +4316,8 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
rte_rwlock_read_unlock(&vq->access_lock);
if (unlikely(rarp_mbuf != NULL))
- count += 1;
+ nb_rx += 1;
out_no_unlock:
- return count;
+ return nb_rx;
}
--
2.47.1
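
The shape of the rework is the classic single-exit goto ladder: the
result variable starts at the error value, and each exit label releases
exactly the state acquired before it. Below is a standalone C sketch of
that pattern; generic pthread locks and stub helpers stand in for the
vhost internals (struct queue, do_dequeue() and the lock names are
assumptions for illustration, not the committed code):

#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the vhost internals. */
struct queue {
	pthread_rwlock_t access_lock;	/* assumed initialized by the caller */
	pthread_mutex_t iotlb_lock;	/* assumed initialized by the caller */
	int enabled;
};

static uint16_t
do_dequeue(struct queue *q)
{
	(void)q;
	return 0;	/* stub for the actual ring processing */
}

static uint16_t
dequeue_skeleton(struct queue *q)
{
	uint16_t nb_rx = 0;	/* error value doubles as the default result */

	if (q == NULL)
		goto out_no_unlock;		/* nothing locked yet */

	if (pthread_rwlock_tryrdlock(&q->access_lock) != 0)
		goto out_no_unlock;

	if (!q->enabled)
		goto out_access_unlock;		/* only access_lock held */

	pthread_mutex_lock(&q->iotlb_lock);

	nb_rx = do_dequeue(q);	/* the only place nb_rx becomes non-zero */

	pthread_mutex_unlock(&q->iotlb_lock);
out_access_unlock:
	pthread_rwlock_unlock(&q->access_lock);
out_no_unlock:
	return nb_rx;
}

Because nb_rx is only assigned on the success path, every early exit
returns 0 without per-site resets -- the exact class of omission that
patch 1 fixed.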
* [PATCH v3 3/3] vhost: improve RARP handling in dequeue paths
2025-01-16 9:54 [PATCH v3 0/3] vhost: fix and improve dequeue error path Maxime Coquelin
2025-01-16 9:54 ` [PATCH v3 1/3] vhost: fix missing packets count reset when not ready Maxime Coquelin
2025-01-16 9:54 ` [PATCH v3 2/3] vhost: rework dequeue paths error handling Maxime Coquelin
@ 2025-01-16 9:54 ` Maxime Coquelin
2025-01-16 10:12 ` David Marchand
` (2 more replies)
2 siblings, 3 replies; 11+ messages in thread
From: Maxime Coquelin @ 2025-01-16 9:54 UTC (permalink / raw)
To: dev, david.marchand, chenbox; +Cc: Maxime Coquelin
With the previous refactoring in place, we can now simplify
the RARP packet injection handling in both the sync and async
dequeue paths.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/virtio_net.c | 72 ++++++++++++++++++------------------------
1 file changed, 30 insertions(+), 42 deletions(-)
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 59ea2d16a5..c5de2d7a28 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3590,7 +3590,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
{
struct virtio_net *dev;
- struct rte_mbuf *rarp_mbuf = NULL;
struct vhost_virtqueue *vq;
int16_t success = 1;
uint16_t nb_rx = 0;
@@ -3651,32 +3650,32 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
if (unlikely(rte_atomic_load_explicit(&dev->broadcast_rarp, rte_memory_order_acquire) &&
rte_atomic_compare_exchange_strong_explicit(&dev->broadcast_rarp,
&success, 0, rte_memory_order_release, rte_memory_order_relaxed))) {
-
- rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
- if (rarp_mbuf == NULL) {
+ /*
+ * Inject the RARP packet to the head of "pkts" array,
+ * so that switch's mac learning table will get updated first.
+ */
+ pkts[nb_rx] = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
+ if (pkts[nb_rx] == NULL) {
VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet.");
goto out;
}
- /*
- * Inject it to the head of "pkts" array, so that switch's mac
- * learning table will get updated first.
- */
- pkts[0] = rarp_mbuf;
- vhost_queue_stats_update(dev, vq, pkts, 1);
- pkts++;
- count -= 1;
+ nb_rx += 1;
}
if (vq_is_packed(dev)) {
if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
- nb_rx = virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool, pkts, count);
+ nb_rx += virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool,
+ pkts + nb_rx, count - nb_rx);
else
- nb_rx = virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool, pkts, count);
+ nb_rx += virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool,
+ pkts + nb_rx, count - nb_rx);
} else {
if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
- nb_rx = virtio_dev_tx_split_legacy(dev, vq, mbuf_pool, pkts, count);
+ nb_rx += virtio_dev_tx_split_legacy(dev, vq, mbuf_pool,
+ pkts + nb_rx, count - nb_rx);
else
- nb_rx = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
+ nb_rx += virtio_dev_tx_split_compliant(dev, vq, mbuf_pool,
+ pkts + nb_rx, count - nb_rx);
}
vhost_queue_stats_update(dev, vq, pkts, nb_rx);
@@ -3687,9 +3686,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
out_access_unlock:
rte_rwlock_read_unlock(&vq->access_lock);
- if (unlikely(rarp_mbuf != NULL))
- nb_rx += 1;
-
out_no_unlock:
return nb_rx;
}
@@ -4194,7 +4190,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
int *nr_inflight, int16_t dma_id, uint16_t vchan_id)
{
struct virtio_net *dev;
- struct rte_mbuf *rarp_mbuf = NULL;
struct vhost_virtqueue *vq;
int16_t success = 1;
uint16_t nb_rx = 0;
@@ -4274,36 +4269,32 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
if (unlikely(rte_atomic_load_explicit(&dev->broadcast_rarp, rte_memory_order_acquire) &&
rte_atomic_compare_exchange_strong_explicit(&dev->broadcast_rarp,
&success, 0, rte_memory_order_release, rte_memory_order_relaxed))) {
-
- rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
- if (rarp_mbuf == NULL) {
+ /*
+ * Inject the RARP packet to the head of "pkts" array,
+ * so that switch's mac learning table will get updated first.
+ */
+ pkts[nb_rx] = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
+ if (pkts[nb_rx] == NULL) {
VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet.");
goto out;
}
- /*
- * Inject it to the head of "pkts" array, so that switch's mac
- * learning table will get updated first.
- */
- pkts[0] = rarp_mbuf;
- vhost_queue_stats_update(dev, vq, pkts, 1);
- pkts++;
- count -= 1;
+ nb_rx += 1;
}
if (vq_is_packed(dev)) {
if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
- nb_rx = virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
- pkts, count, dma_id, vchan_id);
+ nb_rx += virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
+ pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
else
- nb_rx = virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
- pkts, count, dma_id, vchan_id);
+ nb_rx += virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
+ pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
} else {
if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
- nb_rx = virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
- pkts, count, dma_id, vchan_id);
+ nb_rx += virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
+ pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
else
- nb_rx = virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
- pkts, count, dma_id, vchan_id);
+ nb_rx += virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
+ pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
}
*nr_inflight = vq->async->pkts_inflight_n;
@@ -4315,9 +4306,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
out_access_unlock:
rte_rwlock_read_unlock(&vq->access_lock);
- if (unlikely(rarp_mbuf != NULL))
- nb_rx += 1;
-
out_no_unlock:
return nb_rx;
}
--
2.47.1
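
Condensed into a standalone C sketch, the simplified flow is: consume
the one-shot RARP flag, write the RARP packet at the current head of
the array, then dequeue into the remaining slots while accumulating
into nb_rx -- no more shifting the pkts pointer and patching the return
value after unlock. C11 atomics and stub helpers (struct pkt,
make_rarp_pkt(), dequeue_ring()) stand in for the rte_atomic wrappers
and vhost internals; this illustrates the pattern, it is not the
committed code:

#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

struct pkt { int dummy; };	/* stand-in for struct rte_mbuf */

/* Hypothetical helpers, for illustration only. */
static struct pkt *
make_rarp_pkt(void)
{
	static struct pkt rarp;
	return &rarp;	/* stub; the real code allocates from a mempool */
}

static uint16_t
dequeue_ring(struct pkt **pkts, uint16_t count)
{
	(void)pkts;
	(void)count;
	return 0;	/* stub for the ring processing */
}

static _Atomic int broadcast_rarp;	/* one-shot flag set at live migration */

static uint16_t
dequeue_with_rarp(struct pkt **pkts, uint16_t count)
{
	uint16_t nb_rx = 0;
	int expected = 1;

	if (count == 0)
		return 0;

	/*
	 * Consume the one-shot flag: only the thread whose
	 * compare-exchange succeeds injects the RARP packet.
	 */
	if (atomic_load_explicit(&broadcast_rarp, memory_order_acquire) &&
	    atomic_compare_exchange_strong(&broadcast_rarp, &expected, 0)) {
		/* Head of the array, so a switch learns the MAC first. */
		pkts[nb_rx] = make_rarp_pkt();
		if (pkts[nb_rx] == NULL)
			return nb_rx;	/* allocation failed: deliver nothing */
		nb_rx++;
	}

	/* Fill the remaining slots; nb_rx accumulates the grand total. */
	nb_rx += dequeue_ring(pkts + nb_rx, count - nb_rx);

	return nb_rx;
}

With the count folded into nb_rx up front, the rarp_mbuf local and the
post-unlock "nb_rx += 1" fix-up both disappear, which the diffstat
reflects.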
* Re: [PATCH v3 1/3] vhost: fix missing packets count reset when not ready
2025-01-16 9:54 ` [PATCH v3 1/3] vhost: fix missing packets count reset when not ready Maxime Coquelin
@ 2025-01-16 10:10 ` David Marchand
2025-01-16 12:14 ` Chenbo Xia
1 sibling, 0 replies; 11+ messages in thread
From: David Marchand @ 2025-01-16 10:10 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, chenbox, stable
On Thu, Jan 16, 2025 at 10:54 AM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch fixes the rte_vhost_dequeue_burst() return value
> when the virtqueue is not ready. Without it, the caller of
> this API gets a return value that does not match the number
> of packets actually written into the array.
>
> Fixes: 9fc93a1e2320 ("vhost: fix virtqueue access check in datapath")
> Cc: stable@dpdk.org
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
--
David Marchand
* Re: [PATCH v3 2/3] vhost: rework dequeue paths error handling
2025-01-16 9:54 ` [PATCH v3 2/3] vhost: rework dequeue paths error handling Maxime Coquelin
@ 2025-01-16 10:10 ` David Marchand
2025-01-16 12:15 ` Chenbo Xia
1 sibling, 0 replies; 11+ messages in thread
From: David Marchand @ 2025-01-16 10:10 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, chenbox
On Thu, Jan 16, 2025 at 10:54 AM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch refactors the error handling in the Vhost
> dequeue paths to improve their maintainability and readability.
>
> Suggested-by: David Marchand <david.marchand@redhat.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
--
David Marchand
* Re: [PATCH v3 3/3] vhost: improve RARP handling in dequeue paths
2025-01-16 9:54 ` [PATCH v3 3/3] vhost: improve RARP handling in dequeue paths Maxime Coquelin
@ 2025-01-16 10:12 ` David Marchand
2025-01-16 12:15 ` Chenbo Xia
2025-01-16 13:08 ` David Marchand
2 siblings, 0 replies; 11+ messages in thread
From: David Marchand @ 2025-01-16 10:12 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, chenbox
On Thu, Jan 16, 2025 at 10:54 AM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> With the previous refactoring in place, we can now simplify
> the RARP packet injection handling in both the sync and async
> dequeue paths.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
--
David Marchand
* Re: [PATCH v3 1/3] vhost: fix missing packets count reset when not ready
2025-01-16 9:54 ` [PATCH v3 1/3] vhost: fix missing packets count reset when not ready Maxime Coquelin
2025-01-16 10:10 ` David Marchand
@ 2025-01-16 12:14 ` Chenbo Xia
1 sibling, 0 replies; 11+ messages in thread
From: Chenbo Xia @ 2025-01-16 12:14 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, david.marchand, stable
> On Jan 16, 2025, at 17:54, Maxime Coquelin <maxime.coquelin@redhat.com> wrote:
>
> This patch fixes the rte_vhost_dequeue_burst() return value
> when the virtqueue is not ready. Without it, the caller of
> this API gets a return value that does not match the number
> of packets actually written into the array.
>
> Fixes: 9fc93a1e2320 ("vhost: fix virtqueue access check in datapath")
> Cc: stable@dpdk.org
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> [snip]
Reviewed-by: Chenbo Xia <chenbox@nvidia.com>
* Re: [PATCH v3 2/3] vhost: rework dequeue paths error handling
2025-01-16 9:54 ` [PATCH v3 2/3] vhost: rework dequeue paths error handling Maxime Coquelin
2025-01-16 10:10 ` David Marchand
@ 2025-01-16 12:15 ` Chenbo Xia
1 sibling, 0 replies; 11+ messages in thread
From: Chenbo Xia @ 2025-01-16 12:15 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, david.marchand
> On Jan 16, 2025, at 17:54, Maxime Coquelin <maxime.coquelin@redhat.com> wrote:
>
> This patch refactors the error handling in the Vhost
> dequeue paths to improve their maintainability and readability.
>
> Suggested-by: David Marchand <david.marchand@redhat.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> [snip]
Reviewed-by: Chenbo Xia <chenbox@nvidia.com>
* Re: [PATCH v3 3/3] vhost: improve RARP handling in dequeue paths
2025-01-16 9:54 ` [PATCH v3 3/3] vhost: improve RARP handling in dequeue paths Maxime Coquelin
2025-01-16 10:12 ` David Marchand
@ 2025-01-16 12:15 ` Chenbo Xia
2025-01-16 13:08 ` David Marchand
2 siblings, 0 replies; 11+ messages in thread
From: Chenbo Xia @ 2025-01-16 12:15 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, david.marchand
> On Jan 16, 2025, at 17:54, Maxime Coquelin <maxime.coquelin@redhat.com> wrote:
>
> With the previous refactoring in place, we can now simplify
> the RARP packet injection handling in both the sync and async
> dequeue paths.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> [snip]
Reviewed-by: Chenbo Xia <chenbox@nvidia.com>
* Re: [PATCH v3 3/3] vhost: improve RARP handling in dequeue paths
2025-01-16 9:54 ` [PATCH v3 3/3] vhost: improve RARP handling in dequeue paths Maxime Coquelin
2025-01-16 10:12 ` David Marchand
2025-01-16 12:15 ` Chenbo Xia
@ 2025-01-16 13:08 ` David Marchand
2 siblings, 0 replies; 11+ messages in thread
From: David Marchand @ 2025-01-16 13:08 UTC (permalink / raw)
To: dev; +Cc: chenbox, Maxime Coquelin
On Thu, Jan 16, 2025 at 10:54 AM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> With the previous refactoring in place, we can now simplify
> the RARP packet injection handling in both the sync and async
> dequeue paths.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Recheck-request: github-robot
--
David Marchand