From mboxrd@z Thu Jan  1 00:00:00 1970
From: David Marchand
Date: Thu, 19 Dec 2024 09:24:00 +0100
Subject: Re: [PATCH 1/1] vhost: fix a double fetch when dequeue offloading
To: Yunjian Wang, maxime.coquelin@redhat.com
Cc: dev@dpdk.org, chenbox@nvidia.com, jerry.lilijun@huawei.com, xiawei40@huawei.com, wangzengyuan@huawei.com, stable@dpdk.org
List-Id: DPDK patches and discussions
In-Reply-To: <91dc12662805a3867413940f856ba9454b91c579.1734588243.git.wangyunjian@huawei.com>
Content-Type: text/plain; charset="UTF-8"
On Thu, Dec 19, 2024 at 7:38 AM Yunjian Wang wrote:
>
> Reading hdr->csum_start does two successive reads from user space of a
> variable length data structure. The result overflows if the data
> structure changes between the two reads.
>
> To fix this, we can prevent the double fetch issue by copying virtio_hdr
> to a temporary variable.
>
> Fixes: 4dc4e33ffa10 ("net/virtio: fix Rx checksum calculation")
> Cc: stable@dpdk.org
>
> Signed-off-by: Yunjian Wang
> ---
>  lib/vhost/virtio_net.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 69901ab3b5..5c40ae7069 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -2914,10 +2914,12 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
>                          * in a contiguous virtual area.
>                          */
>                         copy_vnet_hdr_from_desc(&tmp_hdr, buf_vec);
> -                       hdr = &tmp_hdr;
>                 } else {
> -                       hdr = (struct virtio_net_hdr *)((uintptr_t)buf_vec[0].buf_addr);
> +                       rte_memcpy((void *)(uintptr_t)&tmp_hdr,
> +                               (void *)(uintptr_t)buf_vec[0].buf_addr,
> +                               sizeof(struct virtio_net_hdr));
>                 }
> +               hdr = &tmp_hdr;
>         }

This will need some benchmarking, as I remember that putting rte_memcpy
in inlined helpers had some performance impact.

Instead, I would call copy_vnet_hdr_from_desc unconditionally, and store
the result in a struct virtio_net_hdr hdr variable (plus a has_vnet_hdr
boolean to indicate validity).

Something like:

        if (virtio_net_with_host_offload(dev)) {
-               if (unlikely(buf_vec[0].buf_len < sizeof(struct virtio_net_hdr))) {
-                       /*
-                        * No luck, the virtio-net header doesn't fit
-                        * in a contiguous virtual area.
-                        */
-                       copy_vnet_hdr_from_desc(&tmp_hdr, buf_vec);
-                       hdr = &tmp_hdr;
-               } else {
-                       hdr = (struct virtio_net_hdr *)((uintptr_t)buf_vec[0].buf_addr);
-               }
+               copy_vnet_hdr_from_desc(&hdr, buf_vec);
+               has_vnet_hdr = true;
        }

(Besides, in copy_vnet_hdr_from_desc, the while (cond) {} loop could be
changed to a do {} while (cond) loop; that approach requires performance
numbers too.)

>
>         for (vec_idx = 0; vec_idx < nr_vec; vec_idx++) {
> @@ -3363,7 +3365,7 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
>  {
>         uint16_t avail_idx = vq->last_avail_idx;
>         uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf);
> -       struct virtio_net_hdr *hdr;
> +       struct virtio_net_hdr hdr;
>         uintptr_t desc_addrs[PACKED_BATCH_SIZE];
>         uint16_t ids[PACKED_BATCH_SIZE];
>         uint16_t i;
> @@ -3382,8 +3384,9 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
>
>         if (virtio_net_with_host_offload(dev)) {
>                 vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> -                       hdr = (struct virtio_net_hdr *)(desc_addrs[i]);
> -                       vhost_dequeue_offload(dev, hdr, pkts[i], legacy_ol_flags);
> +                       rte_memcpy((void *)(uintptr_t)&hdr,
> +                               (void *)(uintptr_t)desc_addrs[i], sizeof(struct virtio_net_hdr));
> +                       vhost_dequeue_offload(dev, &hdr, pkts[i], legacy_ol_flags);
>                 }
>         }

Here too, there may be an impact from adding rte_memcpy.
Just do a plain copy, like:

        if (virtio_net_with_host_offload(dev)) {
+               struct virtio_net_hdr hdr;
+
                vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
-                       hdr = (struct virtio_net_hdr *)(desc_addrs[i]);
-                       vhost_dequeue_offload(dev, hdr, pkts[i], legacy_ol_flags);
+                       hdr = *(struct virtio_net_hdr *)(desc_addrs[i]);
+                       vhost_dequeue_offload(dev, &hdr, pkts[i], legacy_ol_flags);
                }

-- 
David Marchand
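[Editor's illustration] To make the double-fetch hazard under discussion concrete, here is a minimal standalone C sketch. The `struct hdr` layout is hypothetical (not the real `struct virtio_net_hdr` definition), and the functions are illustrative, not DPDK code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified stand-in for a guest-shared virtio header. */
struct hdr {
	uint16_t csum_start;
	uint16_t csum_offset;
};

/* Double-fetch pattern: each field access is a separate read of
 * guest-shared memory, so a concurrent guest write landing between
 * the two reads yields an inconsistent (csum_start, csum_offset)
 * pair even though each read is individually valid. */
static uint32_t csum_end_unsafe(const struct hdr *shared)
{
	return (uint32_t)shared->csum_start + shared->csum_offset;
}

/* Copy-once pattern (what the patch moves to): snapshot the whole
 * header into private memory in a single copy, then validate and
 * use only the stable local copy. */
static uint32_t csum_end_safe(const struct hdr *shared)
{
	struct hdr local;

	memcpy(&local, shared, sizeof(local));
	return (uint32_t)local.csum_start + local.csum_offset;
}
```

In single-threaded test code both functions compute the same value; the difference only matters when `shared` can be modified concurrently, which is exactly the situation with a header living in guest memory.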
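[Editor's illustration] The aside about turning the `while (cond) {}` in copy_vnet_hdr_from_desc into a `do {} while (cond)` can also be sketched. This is not the DPDK implementation, only a simplified standalone gather loop over hypothetical `buf_vector` segments, assuming a positive header length so at least one iteration always runs:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified stand-in for DPDK's buf_vec entries. */
struct buf_vector {
	void  *buf_addr;
	size_t buf_len;
};

/* Gather a fixed-size header that may be split across several
 * descriptor segments into one private copy, using the
 * do {} while form: the first iteration is unconditional since
 * hdr_len > 0 guarantees there is at least one chunk to copy. */
static void gather_hdr(void *dst, const struct buf_vector *vec, size_t hdr_len)
{
	uint8_t *out = dst;
	size_t remain = hdr_len;

	do {
		size_t chunk = vec->buf_len < remain ? vec->buf_len : remain;

		memcpy(out, vec->buf_addr, chunk);
		out += chunk;
		remain -= chunk;
		vec++;
	} while (remain > 0);
}
```

Compared with a `while (remain > 0)` loop, this removes one branch on the (common) first pass; whether that is measurable is precisely what the review says needs performance numbers.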