DPDK patches and discussions
From: Andrey Ignatov <rdna@apple.com>
To: dev@dpdk.org
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>,
	Chenbo Xia <chenbox@nvidia.com>, Wei Shen <wshen0123@apple.com>,
	Andrey Ignatov <rdna@apple.com>
Subject: [PATCH] vhost: optimize mbuf allocation in virtio Tx packed path
Date: Thu, 28 Mar 2024 16:33:38 -0700	[thread overview]
Message-ID: <20240328233338.56544-1-rdna@apple.com> (raw)

Currently virtio_dev_tx_packed() always allocates the requested @count of
mbufs, no matter how many packets are actually available on the virtio
Tx ring. It then has to free every mbuf it didn't use, and if, for
example, zero packets were available on the ring, all @count mbufs would
be allocated just to be freed afterwards.

This wastes CPU cycles since rte_pktmbuf_alloc_bulk() /
rte_pktmbuf_free_bulk() do quite a lot of work.

Optimize it by using the same idea virtio_dev_tx_split() uses on the Tx
split path: estimate the number of available entries on the ring and
allocate only that many mbufs.

On the split path it's pretty easy to estimate.

On the packed path it's more work since it requires checking the flags
of up to @count descriptors. Still, it's much less expensive than the
alloc/free pair.

The new get_nb_avail_entries_packed() function doesn't change how
virtio_dev_tx_packed() works with regard to memory barriers since the
barrier between checking flags and other descriptor fields is still in
place later in virtio_dev_tx_batch_packed() and
virtio_dev_tx_single_packed().

The difference, measured with `perf record` / `perf report` for a guest
transmitting ~17 Gbps with MTU 1500 (at lower pps the savings will be
bigger):

* Before the change:

    Samples: 18K of event 'cycles:P', Event count (approx.): 19206831288
      Children      Self      Pid:Command
    -  100.00%   100.00%   798808:dpdk-worker1
                <... skip ...>
                - 99.09% pkt_burst_io_forward
                   - 90.26% common_fwd_stream_receive
                      - 90.04% rte_eth_rx_burst
                         - 75.53% eth_vhost_rx
                            - 74.29% rte_vhost_dequeue_burst
                               - 71.48% virtio_dev_tx_packed_compliant
                                  + 17.11% rte_pktmbuf_alloc_bulk
                                  + 11.80% rte_pktmbuf_free_bulk
                                  + 2.11% vhost_user_inject_irq
                                    0.75% rte_pktmbuf_reset
                                    0.53% __rte_pktmbuf_free_seg_via_array
                                 0.88% vhost_queue_stats_update
                         + 13.66% mlx5_rx_burst_vec
                   + 8.69% common_fwd_stream_transmit

* After:

    Samples: 18K of event 'cycles:P', Event count (approx.): 19225310840
      Children      Self      Pid:Command
    -  100.00%   100.00%   859754:dpdk-worker1
                <... skip ...>
                - 98.61% pkt_burst_io_forward
                   - 86.29% common_fwd_stream_receive
                      - 85.84% rte_eth_rx_burst
                         - 61.94% eth_vhost_rx
                            - 60.05% rte_vhost_dequeue_burst
                               - 55.98% virtio_dev_tx_packed_compliant
                                  + 3.43% rte_pktmbuf_alloc_bulk
                                  + 2.50% vhost_user_inject_irq
                                 1.17% vhost_queue_stats_update
                                 0.76% rte_rwlock_read_unlock
                                 0.54% rte_rwlock_read_trylock
                         + 22.21% mlx5_rx_burst_vec
                   + 12.00% common_fwd_stream_transmit

It can be seen that virtio_dev_tx_packed_compliant() goes from 71.48% to
55.98% with rte_pktmbuf_alloc_bulk() going from 17.11% to 3.43% and
rte_pktmbuf_free_bulk() going away completely.

Signed-off-by: Andrey Ignatov <rdna@apple.com>
---
 lib/vhost/virtio_net.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 1359c5fb1f..b406b5d7d9 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3484,6 +3484,35 @@ virtio_dev_tx_single_packed(struct virtio_net *dev,
 	return ret;
 }
 
+static __rte_always_inline uint16_t
+get_nb_avail_entries_packed(const struct vhost_virtqueue *__rte_restrict vq,
+			    uint16_t max_nb_avail_entries)
+{
+	const struct vring_packed_desc *descs = vq->desc_packed;
+	bool avail_wrap = vq->avail_wrap_counter;
+	uint16_t avail_idx = vq->last_avail_idx;
+	uint16_t nb_avail_entries = 0;
+	uint16_t flags;
+
+	while (nb_avail_entries < max_nb_avail_entries) {
+		flags = descs[avail_idx].flags;
+
+		if ((avail_wrap != !!(flags & VRING_DESC_F_AVAIL)) ||
+		    (avail_wrap == !!(flags & VRING_DESC_F_USED)))
+			return nb_avail_entries;
+
+		if (!(flags & VRING_DESC_F_NEXT))
+			++nb_avail_entries;
+
+		if (unlikely(++avail_idx >= vq->size)) {
+			avail_idx -= vq->size;
+			avail_wrap = !avail_wrap;
+		}
+	}
+
+	return nb_avail_entries;
+}
+
 __rte_always_inline
 static uint16_t
 virtio_dev_tx_packed(struct virtio_net *dev,
@@ -3497,6 +3526,10 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 {
 	uint32_t pkt_idx = 0;
 
+	count = get_nb_avail_entries_packed(vq, count);
+	if (count == 0)
+		return 0;
+
 	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) {
 		vq->stats.mbuf_alloc_failed += count;
 		return 0;
-- 
2.41.0


Thread overview: 10+ messages
2024-03-28 23:33 Andrey Ignatov [this message]
2024-03-28 23:44 ` Stephen Hemminger
2024-03-29  0:10   ` Andrey Ignatov
2024-03-29  2:53     ` Stephen Hemminger
2024-03-29 13:04       ` Maxime Coquelin
2024-03-29 13:42         ` The effect of inlining Morten Brørup
2024-03-29 20:26           ` Tyler Retzlaff
2024-04-01 15:20           ` Mattias Rönnblom
2024-04-03 16:01             ` Morten Brørup
2024-04-03 10:19 ` [PATCH] vhost: optimize mbuf allocation in virtio Tx packed path Maxime Coquelin
