From: David Marchand <david.marchand@redhat.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: dev@dpdk.org, chenbox@nvidia.com
Subject: Re: [PATCH v2 4/4] vhost: improve RARP handling in dequeue paths
Date: Wed, 15 Jan 2025 17:46:48 +0100
Message-ID: <CAJFAV8xhcToUXtp-XAsA=tXA3iBJYOw1aaBqqTiWFzY+x1BaFQ@mail.gmail.com>
In-Reply-To: <20250115125938.2699577-5-maxime.coquelin@redhat.com>
On Wed, Jan 15, 2025 at 1:59 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> With previous refactoring, we can now simplify the RARP
> packet injection handling in both the sync and async
> dequeue paths.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> lib/vhost/virtio_net.c | 42 ++++++++++++++++++------------------------
> 1 file changed, 18 insertions(+), 24 deletions(-)
>
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 59ea2d16a5..fab45ebd54 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -3662,21 +3662,23 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
> * learning table will get updated first.
> */
> pkts[0] = rarp_mbuf;
Well, ideally this would be pkts[nb_rx], but see the comment below.
> - vhost_queue_stats_update(dev, vq, pkts, 1);
> - pkts++;
> - count -= 1;
> + nb_rx += 1;
> }
With this change, the rarp_mbuf variable is no longer needed: you can store
the return value of rte_net_make_rarp_packet() directly into pkts[nb_rx]
(and, at the same time, move the comment about injecting the packet at the
head of the array along with it).
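Something like this (rough, untested sketch against your v2 context, with
the error log on allocation failure left out for brevity):

    /*
     * Inject the RARP packet at the head of the "pkts" array, so that
     * the switch's mac learning table will get updated first.
     */
    pkts[nb_rx] = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
    if (unlikely(pkts[nb_rx] == NULL)) {
            /* same error handling as in your v2: log, then bail out */
            goto out;
    }
    nb_rx += 1;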
>
> if (vq_is_packed(dev)) {
> if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
> - nb_rx = virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool, pkts, count);
> + nb_rx += virtio_dev_tx_packed_legacy(dev, vq, mbuf_pool,
> + pkts + nb_rx, count - nb_rx);
> else
> - nb_rx = virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool, pkts, count);
> + nb_rx += virtio_dev_tx_packed_compliant(dev, vq, mbuf_pool,
> + pkts + nb_rx, count - nb_rx);
> } else {
> if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
> - nb_rx = virtio_dev_tx_split_legacy(dev, vq, mbuf_pool, pkts, count);
> + nb_rx += virtio_dev_tx_split_legacy(dev, vq, mbuf_pool,
> + pkts + nb_rx, count - nb_rx);
> else
> - nb_rx = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
> + nb_rx += virtio_dev_tx_split_compliant(dev, vq, mbuf_pool,
> + pkts + nb_rx, count - nb_rx);
> }
>
> vhost_queue_stats_update(dev, vq, pkts, nb_rx);
> @@ -3687,9 +3689,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
> out_access_unlock:
> rte_rwlock_read_unlock(&vq->access_lock);
>
> - if (unlikely(rarp_mbuf != NULL))
> - nb_rx += 1;
> -
> out_no_unlock:
> return nb_rx;
> }
> @@ -4285,25 +4284,23 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
> * learning table will get updated first.
> */
> pkts[0] = rarp_mbuf;
> - vhost_queue_stats_update(dev, vq, pkts, 1);
> - pkts++;
> - count -= 1;
> + nb_rx += 1;
> }
Idem: the same comments as for the sync path apply here.
>
> if (vq_is_packed(dev)) {
> if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
> - nb_rx = virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
> - pkts, count, dma_id, vchan_id);
> + nb_rx += virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
> + pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
> else
> - nb_rx = virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
> - pkts, count, dma_id, vchan_id);
> + nb_rx += virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
> + pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
> } else {
> if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
> - nb_rx = virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
> - pkts, count, dma_id, vchan_id);
> + nb_rx += virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
> + pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
> else
> - nb_rx = virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
> - pkts, count, dma_id, vchan_id);
> + nb_rx += virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
> + pkts + nb_rx, count - nb_rx, dma_id, vchan_id);
> }
>
> *nr_inflight = vq->async->pkts_inflight_n;
> @@ -4315,9 +4312,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
> out_access_unlock:
> rte_rwlock_read_unlock(&vq->access_lock);
>
> - if (unlikely(rarp_mbuf != NULL))
> - nb_rx += 1;
> -
> out_no_unlock:
> return nb_rx;
> }
> --
> 2.47.1
>
--
David Marchand