From: David Marchand
Date: Thu, 19 Dec 2024 09:24:00 +0100
Subject: Re: [PATCH 1/1] vhost: fix a double fetch when dequeue offloading
To: Yunjian Wang, maxime.coquelin@redhat.com
Cc: dev@dpdk.org, chenbox@nvidia.com, jerry.lilijun@huawei.com, xiawei40@huawei.com, wangzengyuan@huawei.com, stable@dpdk.org
In-Reply-To: <91dc12662805a3867413940f856ba9454b91c579.1734588243.git.wangyunjian@huawei.com>

On Thu, Dec 19, 2024 at 7:38 AM Yunjian Wang wrote:
>
> The hdr->csum_start handling does two successive reads from user space
> to read a variable-length data structure. The result can be inconsistent
> if the data structure changes between the two reads.
>
> To fix this, we can prevent the double fetch issue by copying virtio_hdr
> to a temporary variable.
>
> Fixes: 4dc4e33ffa10 ("net/virtio: fix Rx checksum calculation")
> Cc: stable@dpdk.org
>
> Signed-off-by: Yunjian Wang
> ---
>  lib/vhost/virtio_net.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 69901ab3b5..5c40ae7069 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -2914,10 +2914,12 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
>                          * in a contiguous virtual area.
>                          */
>                         copy_vnet_hdr_from_desc(&tmp_hdr, buf_vec);
> -                       hdr = &tmp_hdr;
>                 } else {
> -                       hdr = (struct virtio_net_hdr *)((uintptr_t)buf_vec[0].buf_addr);
> +                       rte_memcpy((void *)(uintptr_t)&tmp_hdr,
> +                               (void *)(uintptr_t)buf_vec[0].buf_addr,
> +                               sizeof(struct virtio_net_hdr));
>                 }
> +               hdr = &tmp_hdr;
>         }

This will need some benchmarking, as I remember that putting rte_memcpy
in inlined helpers had some performance impact.

Instead, I would call copy_vnet_hdr_from_desc unconditionally and store
the result in a struct virtio_net_hdr hdr variable (plus a has_vnet_hdr
boolean to indicate validity).

Something like:

        if (virtio_net_with_host_offload(dev)) {
-               if (unlikely(buf_vec[0].buf_len < sizeof(struct virtio_net_hdr))) {
-                       /*
-                        * No luck, the virtio-net header doesn't fit
-                        * in a contiguous virtual area.
-                        */
-                       copy_vnet_hdr_from_desc(&tmp_hdr, buf_vec);
-                       hdr = &tmp_hdr;
-               } else {
-                       hdr = (struct virtio_net_hdr *)((uintptr_t)buf_vec[0].buf_addr);
-               }
+               copy_vnet_hdr_from_desc(&hdr, buf_vec);
+               has_vnet_hdr = true;
        }

(Besides, in copy_vnet_hdr_from_desc, the while (cond) { } loop could be
changed to a do { } while (cond), and that approach requires performance
numbers too.)

>
>         for (vec_idx = 0; vec_idx < nr_vec; vec_idx++) {
> @@ -3363,7 +3365,7 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
>  {
>         uint16_t avail_idx = vq->last_avail_idx;
>         uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf);
> -       struct virtio_net_hdr *hdr;
> +       struct virtio_net_hdr hdr;
>         uintptr_t desc_addrs[PACKED_BATCH_SIZE];
>         uint16_t ids[PACKED_BATCH_SIZE];
>         uint16_t i;
> @@ -3382,8 +3384,9 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
>
>         if (virtio_net_with_host_offload(dev)) {
>                 vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> -                       hdr = (struct virtio_net_hdr *)(desc_addrs[i]);
> -                       vhost_dequeue_offload(dev, hdr, pkts[i], legacy_ol_flags);
> +                       rte_memcpy((void *)(uintptr_t)&hdr,
> +                               (void *)(uintptr_t)desc_addrs[i], sizeof(struct virtio_net_hdr));
> +                       vhost_dequeue_offload(dev, &hdr, pkts[i], legacy_ol_flags);
>                 }
>         }

Here too, there may be an impact with adding rte_memcpy.
Just do a copy like:

        if (virtio_net_with_host_offload(dev)) {
+               struct virtio_net_hdr hdr;
+
                vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
-                       hdr = (struct virtio_net_hdr *)(desc_addrs[i]);
-                       vhost_dequeue_offload(dev, hdr, pkts[i], legacy_ol_flags);
+                       hdr = *(struct virtio_net_hdr *)(desc_addrs[i]);
+                       vhost_dequeue_offload(dev, &hdr, pkts[i], legacy_ol_flags);
                }

-- 
David Marchand
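[Editor's note] The fix discussed in this thread is an instance of a general double-fetch (TOCTOU) defense: snapshot guest-writable memory into host-private storage exactly once, then validate and parse only the snapshot. A minimal standalone sketch follows; the struct layout, `vnet_hdr` name, and `guest_buf` are illustrative stand-ins, not DPDK's actual definitions.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for struct virtio_net_hdr (not DPDK's definition). */
struct vnet_hdr {
	uint8_t  flags;
	uint8_t  gso_type;
	uint16_t hdr_len;
	uint16_t gso_size;
	uint16_t csum_start;
	uint16_t csum_offset;
};

/* Stand-in for guest-shared memory: in vhost, the guest can rewrite this
 * region at any moment, so every direct dereference is a fresh read that
 * may disagree with earlier reads. */
static uint8_t guest_buf[sizeof(struct vnet_hdr)];

/* Double-fetch defense: copy the header into private memory exactly once;
 * all later validation and use must touch only the private copy. */
static void fetch_vnet_hdr(struct vnet_hdr *dst)
{
	memcpy(dst, guest_buf, sizeof(*dst));
}
```

After `fetch_vnet_hdr()` returns, concurrent guest writes to `guest_buf` can no longer change what the host parses, which is exactly the property the patch wants for `hdr->csum_start`.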
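[Editor's note] The review also suggests turning the `while (cond) { }` loop in `copy_vnet_hdr_from_desc` into a `do { } while (cond)`. The sketch below shows why that transformation is valid for this kind of gather copy: the function is only entered when at least one header byte remains, so the first iteration always does work. The `seg` type and `copy_hdr_from_segs` name are hypothetical simplifications of DPDK's `buf_vector` handling, and segments are assumed non-empty.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical segment descriptor, standing in for DPDK's buf_vector. */
struct seg {
	const uint8_t *addr;
	size_t len;
};

/* Gather a fixed-size header spread across a segment chain. Because the
 * caller guarantees hdr_len > 0 and non-empty segments, the loop body must
 * run at least once, which makes the do { } while form correct and saves
 * one branch on entry compared to while (remain != 0) { }. */
static void copy_hdr_from_segs(uint8_t *dst, size_t hdr_len,
			       const struct seg *segs)
{
	size_t remain = hdr_len;
	size_t vec_idx = 0;

	do {
		size_t chunk = segs[vec_idx].len < remain ?
				segs[vec_idx].len : remain;

		memcpy(dst + (hdr_len - remain), segs[vec_idx].addr, chunk);
		remain -= chunk;
		vec_idx++;
	} while (remain != 0);
}
```

As the review notes, whether the saved branch matters in practice is a question for benchmarks, not inspection.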