DPDK patches and discussions
From: David Marchand <david.marchand@redhat.com>
To: Balazs Nemeth <bnemeth@redhat.com>
Cc: dev <dev@dpdk.org>, Maxime Coquelin <maxime.coquelin@redhat.com>,
	 "Xia, Chenbo" <chenbo.xia@intel.com>
Subject: Re: [dpdk-dev] [PATCH v3] vhost: allocate and free packets in bulk
Date: Fri, 16 Apr 2021 11:05:07 +0200
Message-ID: <CAJFAV8yUKVaHW+fzC5szp-GHwNVe0SVYKCtEFadbMm+dkFd2Ug@mail.gmail.com> (raw)
In-Reply-To: <f273e53ea580bad5f72410e276b371ededa272f2.1618560902.git.bnemeth@redhat.com>

On Fri, Apr 16, 2021 at 10:18 AM Balazs Nemeth <bnemeth@redhat.com> wrote:
>
> Move allocation out further and perform all allocation in bulk. The same
> goes for freeing packets. In the process, also rename
> virtio_dev_pktmbuf_alloc to virtio_dev_pktmbuf_prep. This
> function now receives an already allocated mbuf pointer.
>
> Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>

The title should indicate we are only touching the tx packed path.

What about tx split?
If it is too complex to rework, this can wait.


> ---
>  lib/librte_vhost/virtio_net.c | 58 +++++++++++++++++++++++------------
>  1 file changed, 38 insertions(+), 20 deletions(-)
>
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index ff39878609..d6d5636e0f 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -2168,6 +2168,24 @@ virtio_dev_pktmbuf_alloc(struct virtio_net *dev, struct rte_mempool *mp,
>         return NULL;
>  }
>
> +static __rte_always_inline int
> +virtio_dev_pktmbuf_prep(struct virtio_net *dev, struct rte_mbuf *pkt,
> +                        uint32_t data_len)
> +{
> +       if (rte_pktmbuf_tailroom(pkt) >= data_len)
> +               return 0;
> +
> +       /* attach an external buffer if supported */
> +       if (dev->extbuf && !virtio_dev_extbuf_alloc(pkt, data_len))
> +               return 0;
> +
> +       /* check if chained buffers are allowed */
> +       if (!dev->linearbuf)
> +               return 0;
> +
> +       return -1;
> +}
> +

If virtio_dev_pktmbuf_alloc() uses this new helper, we avoid
duplicating the logic.


>  static __rte_noinline uint16_t
>  virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>         struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)

[snip]

> @@ -2429,7 +2440,7 @@ static __rte_always_inline int
>  virtio_dev_tx_single_packed(struct virtio_net *dev,
>                             struct vhost_virtqueue *vq,
>                             struct rte_mempool *mbuf_pool,
> -                           struct rte_mbuf **pkts)
> +                           struct rte_mbuf *pkts)
>  {
>
>         uint16_t buf_id, desc_count = 0;
> @@ -2462,26 +2473,33 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>         uint32_t pkt_idx = 0;
>         uint32_t remained = count;
>
> +       if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
> +               return 0;
> +
>         do {
>                 rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
>
>                 if (remained >= PACKED_BATCH_SIZE) {
> -                       if (!virtio_dev_tx_batch_packed(dev, vq, mbuf_pool,
> +                       if (!virtio_dev_tx_batch_packed(dev, vq,
>                                                         &pkts[pkt_idx])) {
>                                 pkt_idx += PACKED_BATCH_SIZE;
>                                 remained -= PACKED_BATCH_SIZE;
> +

No need for the extra line.


>                                 continue;
>                         }
>                 }
>
>                 if (virtio_dev_tx_single_packed(dev, vq, mbuf_pool,
> -                                               &pkts[pkt_idx]))
> +                                               pkts[pkt_idx]))
>                         break;
>                 pkt_idx++;
>                 remained--;
>
>         } while (remained);
>
> +       if (pkt_idx != count)
> +               rte_pktmbuf_free_bulk(&pkts[pkt_idx], count - pkt_idx);
> +
>         if (vq->shadow_used_idx) {
>                 do_data_copy_dequeue(vq);
>

With those comments addressed,
Reviewed-by: David Marchand <david.marchand@redhat.com>

Thanks Balazs!


-- 
David Marchand



Thread overview: 27+ messages
2021-04-06 15:44 [dpdk-dev] [PATCH 0/4] Use bulk alloc/free in virtio packed Balazs Nemeth
2021-04-06 15:44 ` [dpdk-dev] [PATCH 1/4] vhost: move allocation of mbuf outside of packet enqueue Balazs Nemeth
2021-04-07 10:17   ` [dpdk-dev] [PATCH v2 " Balazs Nemeth
2021-04-15 12:37     ` Maxime Coquelin
2021-04-16  8:18     ` [dpdk-dev] [PATCH v3] vhost: allocate and free packets in bulk Balazs Nemeth
2021-04-16  8:36       ` Maxime Coquelin
2021-04-16  9:05       ` David Marchand [this message]
2021-04-16  9:12         ` Balazs Nemeth
2021-04-16  9:41           ` Maxime Coquelin
2021-04-16  9:43           ` David Marchand
2021-04-16  9:48       ` [dpdk-dev] [PATCH v4] " Balazs Nemeth
2021-04-16 10:25         ` [dpdk-dev] [PATCH v5] vhost: allocate and free packets in bulk in Tx packed Balazs Nemeth
2021-04-16 11:14           ` David Marchand
2021-04-21  7:48           ` Maxime Coquelin
2021-04-28  3:16             ` Xia, Chenbo
2021-04-07 10:17   ` [dpdk-dev] [PATCH v2 2/4] vhost: perform all mbuf allocations in one loop Balazs Nemeth
2021-04-15 15:30     ` Maxime Coquelin
2021-04-07 10:17   ` [dpdk-dev] [PATCH v2 3/4] vhost: allocate and free packets in bulk Balazs Nemeth
2021-04-15 15:32     ` Maxime Coquelin
2021-04-15 15:45       ` David Marchand
2021-04-15 15:50         ` Maxime Coquelin
2021-04-15 15:58           ` Maxime Coquelin
2021-04-07 10:17   ` [dpdk-dev] [PATCH v2 4/4] vhost: remove unnecessary level of indirection Balazs Nemeth
2021-04-15 15:38     ` Maxime Coquelin
2021-04-06 15:44 ` [dpdk-dev] [PATCH 2/4] vhost: perform all mbuf allocations in one loop Balazs Nemeth
2021-04-06 15:44 ` [dpdk-dev] [PATCH 3/4] vhost: allocate and free packets in bulk Balazs Nemeth
2021-04-06 15:44 ` [dpdk-dev] [PATCH 4/4] vhost: remove unnecessary level of indirection Balazs Nemeth
