From: Yunjian Wang <wangyunjian@huawei.com>
To: <dev@dpdk.org>
Cc: <maxime.coquelin@redhat.com>, <chenbox@nvidia.com>,
<jerry.lilijun@huawei.com>, <xiawei40@huawei.com>,
<wangzengyuan@huawei.com>, Yunjian Wang <wangyunjian@huawei.com>,
<stable@dpdk.org>
Subject: [PATCH 1/1] vhost: fix a double fetch when dequeue offloading
Date: Thu, 19 Dec 2024 14:38:28 +0800
Message-ID: <91dc12662805a3867413940f856ba9454b91c579.1734588243.git.wangyunjian@huawei.com>
Reading hdr->csum_start performs two successive fetches from user space of a
variable-length data structure. The result can overflow if the data structure
changes between the two reads.
To fix this, prevent the double-fetch issue by copying the virtio header into
a temporary variable and using only that copy.
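For illustration only (not part of the patch): below is a minimal,
self-contained C sketch of the double-fetch pattern described above and of
the copy-to-a-local fix. All names in it (fake_virtio_net_hdr,
csum_end_unsafe, csum_end_safe) are invented for the example and do not
exist in DPDK; the real code operates on struct virtio_net_hdr in
vhost_dequeue_offload().

/*
 * Minimal sketch of a double fetch: "shared_hdr" stands in for a header
 * that lives in guest-shared memory and may change at any time.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct fake_virtio_net_hdr {    /* stand-in for struct virtio_net_hdr */
	uint16_t csum_start;
	uint16_t csum_offset;
};

/*
 * Vulnerable pattern: the shared header is dereferenced twice, so the
 * value validated by the first read may differ from the value used by
 * the second read.
 */
static uint32_t
csum_end_unsafe(const volatile struct fake_virtio_net_hdr *shared_hdr)
{
	if (shared_hdr->csum_start > 100)            /* first fetch */
		return 0;
	return (uint32_t)shared_hdr->csum_start +    /* second fetch */
	       shared_hdr->csum_offset;
}

/*
 * Fixed pattern: snapshot the header into a private variable once and
 * only use the snapshot afterwards.
 */
static uint32_t
csum_end_safe(const volatile struct fake_virtio_net_hdr *shared_hdr)
{
	struct fake_virtio_net_hdr hdr;

	memcpy(&hdr, (const void *)shared_hdr, sizeof(hdr));
	if (hdr.csum_start > 100)
		return 0;
	return (uint32_t)hdr.csum_start + hdr.csum_offset;
}

int
main(void)
{
	struct fake_virtio_net_hdr shared = {
		.csum_start = 14,
		.csum_offset = 16,
	};

	printf("unsafe: %u, safe: %u\n",
	       (unsigned int)csum_end_unsafe(&shared),
	       (unsigned int)csum_end_safe(&shared));
	return 0;
}

Copying sizeof(struct virtio_net_hdr) once up front costs a small fixed
memcpy, but it guarantees that every later field access sees the same
snapshot; this is the approach the patch takes with tmp_hdr below.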
Fixes: 4dc4e33ffa10 ("net/virtio: fix Rx checksum calculation")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
---
lib/vhost/virtio_net.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 69901ab3b5..5c40ae7069 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2914,10 +2914,12 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
* in a contiguous virtual area.
*/
copy_vnet_hdr_from_desc(&tmp_hdr, buf_vec);
- hdr = &tmp_hdr;
} else {
- hdr = (struct virtio_net_hdr *)((uintptr_t)buf_vec[0].buf_addr);
+ rte_memcpy((void *)(uintptr_t)&tmp_hdr,
+ (void *)(uintptr_t)buf_vec[0].buf_addr,
+ sizeof(struct virtio_net_hdr));
}
+ hdr = &tmp_hdr;
}
for (vec_idx = 0; vec_idx < nr_vec; vec_idx++) {
@@ -3363,7 +3365,7 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
{
uint16_t avail_idx = vq->last_avail_idx;
uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf);
- struct virtio_net_hdr *hdr;
+ struct virtio_net_hdr hdr;
uintptr_t desc_addrs[PACKED_BATCH_SIZE];
uint16_t ids[PACKED_BATCH_SIZE];
uint16_t i;
@@ -3382,8 +3384,9 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
if (virtio_net_with_host_offload(dev)) {
vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
- hdr = (struct virtio_net_hdr *)(desc_addrs[i]);
- vhost_dequeue_offload(dev, hdr, pkts[i], legacy_ol_flags);
+ rte_memcpy((void *)(uintptr_t)&hdr,
+ (void *)(uintptr_t)desc_addrs[i], sizeof(struct virtio_net_hdr));
+ vhost_dequeue_offload(dev, &hdr, pkts[i], legacy_ol_flags);
}
}
--
2.33.0
Thread overview: 10+ messages
2024-12-19 6:38 Yunjian Wang [this message]
2024-12-19 8:24 ` David Marchand
2024-12-19 11:02 ` Wangyunjian(wangyunjian,TongTu)
2024-12-19 16:15 ` Stephen Hemminger
2024-12-20 2:17 ` Wangyunjian(wangyunjian,TongTu)
2024-12-20 4:59 ` Stephen Hemminger
2024-12-20 3:49 ` [PATCH v2 " Yunjian Wang
2024-12-20 17:10 ` Stephen Hemminger
2024-12-20 16:35 ` [PATCH " Stephen Hemminger
2024-12-23 2:45 ` Wangyunjian(wangyunjian,TongTu)