From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yunjian Wang
To: <dev@dpdk.org>
CC: Yunjian Wang
Subject: [PATCH v2 1/1] vhost: fix a double fetch when dequeue offloading
Date: Fri, 20 Dec 2024 11:49:55 +0800
Message-ID: <09058cfb25d7583f67d74f09cd36673f1b10f5ec.1734661755.git.wangyunjian@huawei.com>
In-Reply-To: <91dc12662805a3867413940f856ba9454b91c579.1734588243.git.wangyunjian@huawei.com>
References: <91dc12662805a3867413940f856ba9454b91c579.1734588243.git.wangyunjian@huawei.com>
List-Id: DPDK patches and discussions

The hdr->csum_start does
two successive reads from user space to read a variable length data
structure. The result may be inconsistent if the data structure changes
between the two reads.

To fix this, prevent the double-fetch issue by copying the virtio
header into a temporary variable.

Fixes: 4dc4e33ffa10 ("net/virtio: fix Rx checksum calculation")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang
---
v2: update code styles suggested by David Marchand
---
 lib/vhost/virtio_net.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 69901ab3b5..2676447906 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2896,8 +2896,8 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint32_t hdr_remain = dev->vhost_hlen;
 	uint32_t cpy_len;
 	struct rte_mbuf *cur = m, *prev = m;
-	struct virtio_net_hdr tmp_hdr;
-	struct virtio_net_hdr *hdr = NULL;
+	bool has_vnet_hdr = false;
+	struct virtio_net_hdr hdr;
 	uint16_t vec_idx;
 	struct vhost_async *async = vq->async;
 	struct async_inflight_info *pkts_info;
@@ -2913,11 +2913,11 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			 * No luck, the virtio-net header doesn't fit
 			 * in a contiguous virtual area.
			 */
-			copy_vnet_hdr_from_desc(&tmp_hdr, buf_vec);
-			hdr = &tmp_hdr;
+			copy_vnet_hdr_from_desc(&hdr, buf_vec);
 		} else {
-			hdr = (struct virtio_net_hdr *)((uintptr_t)buf_vec[0].buf_addr);
+			hdr = *(struct virtio_net_hdr *)((uintptr_t)buf_vec[0].buf_addr);
 		}
+		has_vnet_hdr = true;
 	}

 	for (vec_idx = 0; vec_idx < nr_vec; vec_idx++) {
@@ -2953,7 +2953,7 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			if (async_fill_seg(dev, vq, cur, mbuf_offset,
 					buf_iova + buf_offset, cpy_len, false) < 0)
 				goto error;
-		} else if (likely(hdr && cur == m)) {
+		} else if (likely(has_vnet_hdr && cur == m)) {
 			rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *, mbuf_offset),
 				(void *)((uintptr_t)(buf_addr + buf_offset)),
 				cpy_len);
@@ -3013,10 +3013,10 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	if (is_async) {
 		async_iter_finalize(async);

-		if (hdr)
-			pkts_info[slot_idx].nethdr = *hdr;
-	} else if (hdr) {
-		vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
+		if (has_vnet_hdr)
+			pkts_info[slot_idx].nethdr = hdr;
+	} else if (has_vnet_hdr) {
+		vhost_dequeue_offload(dev, &hdr, m, legacy_ol_flags);
 	}

 	return 0;
@@ -3363,7 +3363,6 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
 {
 	uint16_t avail_idx = vq->last_avail_idx;
 	uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf);
-	struct virtio_net_hdr *hdr;
 	uintptr_t desc_addrs[PACKED_BATCH_SIZE];
 	uint16_t ids[PACKED_BATCH_SIZE];
 	uint16_t i;
@@ -3381,9 +3380,11 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
 			pkts[i]->pkt_len);

 	if (virtio_net_with_host_offload(dev)) {
+		struct virtio_net_hdr hdr;
+
 		vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
-			hdr = (struct virtio_net_hdr *)(desc_addrs[i]);
-			vhost_dequeue_offload(dev, hdr, pkts[i], legacy_ol_flags);
+			hdr = *(struct virtio_net_hdr *)(desc_addrs[i]);
+			vhost_dequeue_offload(dev, &hdr, pkts[i], legacy_ol_flags);
 		}
 	}
-- 
2.33.0
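
The core of the fix is replacing a pointer into guest-shared memory with a by-value snapshot, so the value that gets validated is the same value that gets used. A minimal standalone sketch of the double-fetch pattern and its remedy (struct layout and function names here are illustrative, not the DPDK definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical layout standing in for struct virtio_net_hdr. */
struct vnet_hdr {
	uint8_t  flags;
	uint8_t  gso_type;
	uint16_t hdr_len;
	uint16_t gso_size;
	uint16_t csum_start;
	uint16_t csum_offset;
};

/*
 * Unsafe pattern: each dereference of 'shared' re-reads guest-writable
 * memory, so the value that passed the bounds check may differ from the
 * value actually returned (a TOCTOU / double-fetch bug).
 */
static int parse_unsafe(const volatile struct vnet_hdr *shared, size_t buf_len)
{
	if (shared->csum_start >= buf_len)	/* first fetch */
		return -1;
	return shared->csum_start;		/* second fetch: may differ */
}

/* Safe pattern: snapshot the header once, then use only the local copy. */
static int parse_safe(const volatile struct vnet_hdr *shared, size_t buf_len)
{
	struct vnet_hdr hdr;

	memcpy(&hdr, (const void *)shared, sizeof(hdr));	/* single fetch */
	if (hdr.csum_start >= buf_len)
		return -1;
	return hdr.csum_start;		/* same value that was validated */
}
```

This mirrors what the patch does: `hdr` becomes a stack-local `struct virtio_net_hdr`, filled either by `copy_vnet_hdr_from_desc()` or by a dereferencing assignment, and every later consumer operates on the copy.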