From: "Wangyunjian(wangyunjian,TongTu)"
To: Stephen Hemminger
CC: dev@dpdk.org, maxime.coquelin@redhat.com, chenbox@nvidia.com, "Lilijun (Jerry)", "xiawei (H)", wangzengyuan, stable@dpdk.org
Subject: RE: [PATCH 1/1] vhost: fix a double fetch when dequeue offloading
Date: Mon, 23 Dec 2024 02:45:48 +0000
Message-ID: <96dbb1599a3943f39c7e80d31dd3881a@huawei.com>
References: <91dc12662805a3867413940f856ba9454b91c579.1734588243.git.wangyunjian@huawei.com> <20241220083546.4b5ba9c9@hermes.local>
In-Reply-To: <20241220083546.4b5ba9c9@hermes.local>

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Saturday, December 21, 2024 12:36 AM
> To: Wangyunjian(wangyunjian,TongTu)
> Cc: dev@dpdk.org; maxime.coquelin@redhat.com; chenbox@nvidia.com;
> Lilijun (Jerry); xiawei (H); wangzengyuan; stable@dpdk.org
> Subject: Re: [PATCH 1/1] vhost: fix a double fetch when dequeue offloading
>
> On Thu, 19 Dec 2024 14:38:28 +0800
> Yunjian Wang wrote:
>
> > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> > index 69901ab3b5..5c40ae7069 100644
> > --- a/lib/vhost/virtio_net.c
> > +++ b/lib/vhost/virtio_net.c
> > @@ -2914,10 +2914,12 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >  			 * in a contiguous virtual area.
> >  			 */
> >  			copy_vnet_hdr_from_desc(&tmp_hdr, buf_vec);
> > -			hdr = &tmp_hdr;
> >  		} else {
> > -			hdr = (struct virtio_net_hdr *)((uintptr_t)buf_vec[0].buf_addr);
> > +			rte_memcpy((void *)(uintptr_t)&tmp_hdr,
> > +				(void *)(uintptr_t)buf_vec[0].buf_addr,
> > +				sizeof(struct virtio_net_hdr));
> >  		}
> > +		hdr = &tmp_hdr;
>
> Since this if block is just an optimization for the case where the vnet header
> is contiguous, why not just always use copy_vnet_hdr_from_desc? And inline it?

I also considered using copy_vnet_hdr_from_desc directly. However, in most
cases the vnet header is contiguous, and reusing copy_vnet_hdr_from_desc for
that common case would add unnecessary per-segment copy operations.

Thanks,
Yunjian
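
For anyone skimming the archive later, here is a rough, self-contained sketch of
the shape being discussed. The stand-in types and the fetch_vnet_hdr helper are
simplifications for illustration, not the actual lib/vhost code: the point is that
both branches land the header in a private stack copy (so later reads cannot race
with the guest rewriting the shared buffer), while the common contiguous case
stays a single copy instead of the per-segment walk.

    #include <stdint.h>
    #include <string.h>

    /* Simplified stand-ins for the real vhost structures (illustration only). */
    struct virtio_net_hdr {
            uint8_t flags;
            uint8_t gso_type;
            uint16_t hdr_len;
            uint16_t gso_size;
            uint16_t csum_start;
            uint16_t csum_offset;
    };

    struct buf_vector {
            uint64_t buf_addr;      /* host virtual address of this descriptor segment */
            uint32_t buf_len;       /* length of this segment */
    };

    /*
     * Copy the vnet header into a caller-provided stack buffer in both branches,
     * so every later read of the header goes to tmp_hdr and never back to guest
     * memory (the double fetch the patch removes).
     */
    static void
    fetch_vnet_hdr(const struct buf_vector *buf_vec, struct virtio_net_hdr *tmp_hdr)
    {
            if (buf_vec[0].buf_len < sizeof(*tmp_hdr)) {
                    /* Header split across descriptors: gather it segment by segment. */
                    size_t remain = sizeof(*tmp_hdr);
                    uint8_t *dst = (uint8_t *)tmp_hdr;

                    while (remain > 0) {
                            size_t len = buf_vec->buf_len < remain ?
                                            buf_vec->buf_len : remain;

                            memcpy(dst, (const void *)(uintptr_t)buf_vec->buf_addr, len);
                            dst += len;
                            remain -= len;
                            buf_vec++;
                    }
            } else {
                    /* Common contiguous case: a single copy into the stack header. */
                    memcpy(tmp_hdr, (const void *)(uintptr_t)buf_vec[0].buf_addr,
                            sizeof(*tmp_hdr));
            }
    }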