DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Andrey Ignatov <rdna@apple.com>, dev@dpdk.org
Cc: Chenbo Xia <chenbox@nvidia.com>, Wei Shen <wshen0123@apple.com>
Subject: Re: [PATCH] vhost: optimize mbuf allocation in virtio Tx packed path
Date: Wed, 3 Apr 2024 12:19:46 +0200	[thread overview]
Message-ID: <98c1642c-c32d-46b5-a34c-4bfcc530905f@redhat.com> (raw)
In-Reply-To: <20240328233338.56544-1-rdna@apple.com>



On 3/29/24 00:33, Andrey Ignatov wrote:
> Currently virtio_dev_tx_packed() always allocates the requested @count
> of mbufs, no matter how many packets are actually available on the
> virtio Tx ring. Later it has to free all the mbufs it didn't use; if,
> for example, there were zero available packets on the ring, all @count
> mbufs would be allocated just to be freed afterwards.
> 
> This wastes CPU cycles since rte_pktmbuf_alloc_bulk() /
> rte_pktmbuf_free_bulk() do quite a lot of work.
> 
> Optimize it by using the same idea virtio_dev_tx_split() uses on the
> split path: estimate the number of available entries on the ring and
> allocate only that number of mbufs.
> 
> On the split path the estimate is cheap: it is simply the difference
> between the ring's available index and the last available index seen
> by the backend.
> 
> On the packed path it's more work, since it requires checking the flags
> of up to @count descriptors. Still, it's much less expensive than the
> alloc/free pair.
> 
> The new get_nb_avail_entries_packed() function doesn't change how
> virtio_dev_tx_packed() works with regard to memory barriers since the
> barrier between checking flags and other descriptor fields is still in
> place later in virtio_dev_tx_batch_packed() and
> virtio_dev_tx_single_packed().
> 
> The difference as seen in `perf record` / `perf report` for a guest
> transmitting ~17 Gbps with MTU 1500 (at lower pps the savings will be
> bigger):
> 
> * Before the change:
> 
>      Samples: 18K of event 'cycles:P', Event count (approx.): 19206831288
>        Children      Self      Pid:Command
>      -  100.00%   100.00%   798808:dpdk-worker1
>                  <... skip ...>
>                  - 99.09% pkt_burst_io_forward
>                     - 90.26% common_fwd_stream_receive
>                        - 90.04% rte_eth_rx_burst
>                           - 75.53% eth_vhost_rx
>                              - 74.29% rte_vhost_dequeue_burst
>                                 - 71.48% virtio_dev_tx_packed_compliant
>                                    + 17.11% rte_pktmbuf_alloc_bulk
>                                    + 11.80% rte_pktmbuf_free_bulk
>                                    + 2.11% vhost_user_inject_irq
>                                      0.75% rte_pktmbuf_reset
>                                      0.53% __rte_pktmbuf_free_seg_via_array
>                                   0.88% vhost_queue_stats_update
>                           + 13.66% mlx5_rx_burst_vec
>                     + 8.69% common_fwd_stream_transmit
> 
> * After:
> 
>      Samples: 18K of event 'cycles:P', Event count (approx.): 19225310840
>        Children      Self      Pid:Command
>      -  100.00%   100.00%   859754:dpdk-worker1
>                  <... skip ...>
>                  - 98.61% pkt_burst_io_forward
>                     - 86.29% common_fwd_stream_receive
>                        - 85.84% rte_eth_rx_burst
>                           - 61.94% eth_vhost_rx
>                              - 60.05% rte_vhost_dequeue_burst
>                                 - 55.98% virtio_dev_tx_packed_compliant
>                                    + 3.43% rte_pktmbuf_alloc_bulk
>                                    + 2.50% vhost_user_inject_irq
>                                   1.17% vhost_queue_stats_update
>                                   0.76% rte_rwlock_read_unlock
>                                   0.54% rte_rwlock_read_trylock
>                           + 22.21% mlx5_rx_burst_vec
>                     + 12.00% common_fwd_stream_transmit
> 
> It can be seen that virtio_dev_tx_packed_compliant() drops from 71.48%
> to 55.98%, with rte_pktmbuf_alloc_bulk() going from 17.11% to 3.43% and
> rte_pktmbuf_free_bulk() disappearing from the profile entirely.
> 
> Signed-off-by: Andrey Ignatov <rdna@apple.com>
> ---
>   lib/vhost/virtio_net.c | 33 +++++++++++++++++++++++++++++++++
>   1 file changed, 33 insertions(+)
> 
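
For readers who don't have the diff in front of them, below is a minimal
sketch of the flag-based estimation described above. It is an
illustration written against the vhost library's packed-ring structures,
not a quote of the submitted patch, so the exact field names and loop
shape should be treated as assumptions:

    /*
     * Sketch only: walk the packed ring from last_avail_idx and count how
     * many descriptors the driver has made available and the device has
     * not yet used, stopping at the first unavailable descriptor or after
     * max_entries. Struct and macro names (struct vhost_virtqueue,
     * struct vring_packed_desc, VRING_DESC_F_AVAIL/USED) follow the vhost
     * library's packed-ring code.
     */
    static uint16_t
    get_nb_avail_entries_packed(const struct vhost_virtqueue *vq,
            uint16_t max_entries)
    {
        const struct vring_packed_desc *descs = vq->desc_packed;
        bool wrap = vq->avail_wrap_counter;
        uint16_t idx = vq->last_avail_idx;
        uint16_t nb_avail = 0;

        while (nb_avail < max_entries) {
            uint16_t flags = descs[idx].flags;

            /*
             * A descriptor is available when its AVAIL bit matches the
             * current wrap counter and its USED bit does not. Only the
             * flags are read here; the acquire barrier before reading
             * the remaining descriptor fields stays where it is today,
             * in the batch/single dequeue helpers.
             */
            if (!!(flags & VRING_DESC_F_AVAIL) != wrap ||
                    !!(flags & VRING_DESC_F_USED) == wrap)
                break;

            if (++idx >= vq->size) {
                idx -= vq->size;
                wrap = !wrap;
            }
            nb_avail++;
        }

        return nb_avail;
    }

virtio_dev_tx_packed() would then clamp the allocation to that estimate
before calling rte_pktmbuf_alloc_bulk(), along the lines of:

    count = get_nb_avail_entries_packed(vq, count);
    if (count == 0)
        return 0;
    if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
        return 0;

so no mbuf is allocated only to be handed straight back to
rte_pktmbuf_free_bulk().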

Thanks for the contribution and the detailed commit message.

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Maxime



Thread overview: 10+ messages
2024-03-28 23:33 Andrey Ignatov
2024-03-28 23:44 ` Stephen Hemminger
2024-03-29  0:10   ` Andrey Ignatov
2024-03-29  2:53     ` Stephen Hemminger
2024-03-29 13:04       ` Maxime Coquelin
2024-03-29 13:42         ` The effect of inlining Morten Brørup
2024-03-29 20:26           ` Tyler Retzlaff
2024-04-01 15:20           ` Mattias Rönnblom
2024-04-03 16:01             ` Morten Brørup
2024-04-03 10:19 ` Maxime Coquelin [this message]
