DPDK patches and discussions
From: "Wangyunjian(wangyunjian,TongTu)" <wangyunjian@huawei.com>
To: David Marchand <david.marchand@redhat.com>,
	"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"chenbox@nvidia.com" <chenbox@nvidia.com>,
	 "Lilijun (Jerry)" <jerry.lilijun@huawei.com>,
	"xiawei (H)" <xiawei40@huawei.com>,
	wangzengyuan <wangzengyuan@huawei.com>,
	"stable@dpdk.org" <stable@dpdk.org>
Subject: RE: [PATCH 1/1] vhost: fix a double fetch when dequeue offloading
Date: Thu, 19 Dec 2024 11:02:40 +0000	[thread overview]
Message-ID: <5809b61b17cf490b98b1bbec74c55b1a@huawei.com> (raw)
In-Reply-To: <CAJFAV8yWEmK7PwddfeKFLB8LYJ9ai6SSmTgSy2gSmT4vU5Y_4g@mail.gmail.com>


> -----Original Message-----
> From: David Marchand [mailto:david.marchand@redhat.com]
> Sent: Thursday, December 19, 2024 4:24 PM
> To: Wangyunjian(wangyunjian,TongTu) <wangyunjian@huawei.com>;
> maxime.coquelin@redhat.com
> Cc: dev@dpdk.org; chenbox@nvidia.com; Lilijun (Jerry)
> <jerry.lilijun@huawei.com>; xiawei (H) <xiawei40@huawei.com>;
> wangzengyuan <wangzengyuan@huawei.com>; stable@dpdk.org
> Subject: Re: [PATCH 1/1] vhost: fix a double fetch when dequeue offloading
> 
> On Thu, Dec 19, 2024 at 7:38 AM Yunjian Wang <wangyunjian@huawei.com>
> wrote:
> >
> > Using hdr->csum_start performs two successive reads from user space
> > of a variable-length data structure. The result may be inconsistent
> > if the data structure changes between the two reads.
> >
> > To fix this, we can prevent double fetch issue by copying virtio_hdr to
> > the temporary variable.
> >
> > Fixes: 4dc4e33ffa10 ("net/virtio: fix Rx checksum calculation")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
> > ---
> >  lib/vhost/virtio_net.c | 13 ++++++++-----
> >  1 file changed, 8 insertions(+), 5 deletions(-)
> >
> > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> > index 69901ab3b5..5c40ae7069 100644
> > --- a/lib/vhost/virtio_net.c
> > +++ b/lib/vhost/virtio_net.c
> > @@ -2914,10 +2914,12 @@ desc_to_mbuf(struct virtio_net *dev, struct
> vhost_virtqueue *vq,
> >                          * in a contiguous virtual area.
> >                          */
> >                         copy_vnet_hdr_from_desc(&tmp_hdr,
> buf_vec);
> > -                       hdr = &tmp_hdr;
> >                 } else {
> > -                       hdr = (struct virtio_net_hdr
> *)((uintptr_t)buf_vec[0].buf_addr);
> > +                       rte_memcpy((void *)(uintptr_t)&tmp_hdr,
> > +                               (void
> *)(uintptr_t)buf_vec[0].buf_addr,
> > +                               sizeof(struct virtio_net_hdr));
> >                 }
> > +               hdr = &tmp_hdr;
> >         }
> 
> This will need some benchmark, as I remember putting rte_memcpy in
> inlined helpers had some performance impact.
> 
> Instead, I would call copy_vnet_hdr_from_desc unconditionally, and
> store in a struct virtio_net_hdr hdr variable (+ a has_vnet_hdr
> boolean to indicate validity).
> Something like:
>         if (virtio_net_with_host_offload(dev)) {
> -               if (unlikely(buf_vec[0].buf_len < sizeof(struct
> virtio_net_hdr))) {
> -                       /*
> -                        * No luck, the virtio-net header doesn't fit
> -                        * in a contiguous virtual area.
> -                        */
> -                       copy_vnet_hdr_from_desc(&tmp_hdr, buf_vec);
> -                       hdr = &tmp_hdr;
> -               } else {
> -                       hdr = (struct virtio_net_hdr
> *)((uintptr_t)buf_vec[0].buf_addr);
> -               }
> +               copy_vnet_hdr_from_desc(&hdr, buf_vec);
> +               has_vnet_hdr = true;
>         }
> 
> (besides, in copy_vnet_hdr_from_desc, the while (cond) {} loop could
> be changed to a do {} while (cond), and that approach requires
> performance numbers too)

How about this?
@@ -2904,8 +2904,8 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
        uint32_t hdr_remain = dev->vhost_hlen;
        uint32_t cpy_len;
        struct rte_mbuf *cur = m, *prev = m;
-       struct virtio_net_hdr tmp_hdr;
-       struct virtio_net_hdr *hdr = NULL;
+       bool has_vnet_hdr = false;
+       struct virtio_net_hdr hdr;
        uint16_t vec_idx;
        struct vhost_async *async = vq->async;
        struct async_inflight_info *pkts_info;
@@ -2921,11 +2921,11 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
                         * No luck, the virtio-net header doesn't fit
                         * in a contiguous virtual area.
                         */
-                       copy_vnet_hdr_from_desc(&tmp_hdr, buf_vec);
-                       hdr = &tmp_hdr;
+                       copy_vnet_hdr_from_desc(&hdr, buf_vec);
                } else {
-                       hdr = (struct virtio_net_hdr *)((uintptr_t)buf_vec[0].buf_addr);
+                       hdr = *(struct virtio_net_hdr *)((uintptr_t)buf_vec[0].buf_addr);
                }
+               has_vnet_hdr = true;
        }


> 
> 
> >
> >         for (vec_idx = 0; vec_idx < nr_vec; vec_idx++) {
> > @@ -3363,7 +3365,7 @@ virtio_dev_tx_batch_packed(struct virtio_net
> *dev,
> >  {
> >         uint16_t avail_idx = vq->last_avail_idx;
> >         uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf);
> > -       struct virtio_net_hdr *hdr;
> > +       struct virtio_net_hdr hdr;
> >         uintptr_t desc_addrs[PACKED_BATCH_SIZE];
> >         uint16_t ids[PACKED_BATCH_SIZE];
> >         uint16_t i;
> > @@ -3382,8 +3384,9 @@ virtio_dev_tx_batch_packed(struct virtio_net
> *dev,
> >
> >         if (virtio_net_with_host_offload(dev)) {
> >                 vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> > -                       hdr = (struct virtio_net_hdr *)(desc_addrs[i]);
> > -                       vhost_dequeue_offload(dev, hdr, pkts[i],
> legacy_ol_flags);
> > +                       rte_memcpy((void *)(uintptr_t)&hdr,
> > +                               (void *)(uintptr_t)desc_addrs[i],
> sizeof(struct virtio_net_hdr));
> > +                       vhost_dequeue_offload(dev, &hdr, pkts[i],
> legacy_ol_flags);
> >                 }
> >         }
> 
> Here too, there may be an impact with adding rte_memcpy.
> Just do a copy like:
> 
>         if (virtio_net_with_host_offload(dev)) {
> +               struct virtio_net_hdr hdr;
> +
>                 vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> -                       hdr = (struct virtio_net_hdr *)(desc_addrs[i]);
> -                       vhost_dequeue_offload(dev, hdr, pkts[i],
> legacy_ol_flags);
> +                       hdr = *(struct virtio_net_hdr *)(desc_addrs[i]);
> +                       vhost_dequeue_offload(dev, &hdr, pkts[i],
> legacy_ol_flags);
>                 }
> 

Thanks for your suggestions, I will include them in the next version.

> 
> --
> David Marchand
> 


  reply	other threads:[~2024-12-19 11:02 UTC|newest]

Thread overview: 10+ messages
2024-12-19  6:38 Yunjian Wang
2024-12-19  8:24 ` David Marchand
2024-12-19 11:02   ` Wangyunjian(wangyunjian,TongTu) [this message]
2024-12-19 16:15 ` Stephen Hemminger
2024-12-20  2:17   ` Wangyunjian(wangyunjian,TongTu)
2024-12-20  4:59     ` Stephen Hemminger
2024-12-20  3:49 ` [PATCH v2 " Yunjian Wang
2024-12-20 17:10   ` Stephen Hemminger
2024-12-20 16:35 ` [PATCH " Stephen Hemminger
2024-12-23  2:45   ` Wangyunjian(wangyunjian,TongTu)
