* [PATCH] net/virtio-user: fix used ring address calculation
From: Maxime Coquelin @ 2025-08-05 8:36 UTC
To: dev, amorenoz, chenbox, david.marchand; +Cc: schalla, Maxime Coquelin, stable
This patch fixes the used ring address calculation: the used ring
address was derived from the avail ring's virtual address instead of
its IOVA, causing Vhost-vDPA backends (such as VDUSE) to fail when
trying to translate it.
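For illustration, here is a minimal standalone sketch of the split-ring
layout arithmetic the corrected code performs, now entirely in IOVA
space (hypothetical values; align_ceil stands in for RTE_ALIGN_CEIL):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define VRING_ALIGN 4096 /* VIRTIO_VRING_ALIGN */

/* Round x up to the next multiple of a (a must be a power of two). */
static uint64_t align_ceil(uint64_t x, uint64_t a)
{
	return (x + a - 1) & ~(a - 1);
}

int main(void)
{
	uint64_t desc_iova = 0x100000; /* hypothetical IOVA of the desc table */
	uint64_t num = 256;            /* queue size */

	/* Descriptor table: num entries of 16 bytes each. */
	uint64_t avail_iova = desc_iova + num * 16;
	/* Avail ring: flags (2) + idx (2) + num entries (2 each); the used
	 * ring then starts at the next VRING_ALIGN boundary. */
	uint64_t used_iova = align_ceil(avail_iova + 4 + num * 2, VRING_ALIGN);

	printf("avail IOVA: 0x%" PRIx64 "\n", avail_iova); /* 0x101000 */
	printf("used  IOVA: 0x%" PRIx64 "\n", used_iova);  /* 0x102000 */
	return 0;
}

The previous code instead took the virtual address of
&avail->ring[num], so the "used" address handed to the backend was not
an IOVA at all.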
Fixes: 666ef294ddf7 ("net/virtio-user: share descriptor IOVA to backend")
Cc: stable@dpdk.org
Reported-by: Adrian Moreno <amorenoz@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_user/virtio_user_dev.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 187f81b066..7789f337f6 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -149,7 +149,8 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 	} else {
 		desc_addr = vring->desc_iova;
 		avail_addr = desc_addr + vring->num * sizeof(struct vring_desc);
-		used_addr = RTE_ALIGN_CEIL((uintptr_t)(&vring->avail->ring[vring->num]),
+		used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
+					ring[vring->num]),
 				VIRTIO_VRING_ALIGN);

 		addr.desc_user_addr = desc_addr;
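A side note on the offsetof() usage above: indexing the flexible array
member with a runtime value inside offsetof() is a compiler extension
(GCC and Clang accept it via __builtin_offsetof); on common ABIs it
reduces to the avail-ring size mandated by the spec. A hypothetical
self-contained check:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Local mirror of the split-ring avail header (event index omitted). */
struct vring_avail {
	uint16_t flags;
	uint16_t idx;
	uint16_t ring[];
};

int main(void)
{
	unsigned int num = 256; /* runtime queue size */

	/* Evaluates to 4 + num * sizeof(uint16_t) here. */
	size_t off = offsetof(struct vring_avail, ring[num]);

	printf("avail size for %u entries: %zu bytes\n", num, off); /* 516 */
	return 0;
}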
--
2.50.1
* Re: [PATCH] net/virtio-user: fix used ring address calculation
From: David Marchand @ 2025-08-18 12:56 UTC
To: Maxime Coquelin; +Cc: dev, amorenoz, chenbox, schalla, stable
On Tue, Aug 5, 2025 at 10:37 AM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch fixes the used ring address calculation: the used ring
> address was derived from the avail ring's virtual address instead of
> its IOVA, causing Vhost-vDPA backends (such as VDUSE) to fail when
> trying to translate it.
Maybe update the patch title to reflect that this issue affects the
vhost-vdpa backend?
>
> Fixes: 666ef294ddf7 ("net/virtio-user: share descriptor IOVA to backend")
> Cc: stable@dpdk.org
>
> Reported-by: Adrian Moreno <amorenoz@redhat.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_user/virtio_user_dev.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 187f81b066..7789f337f6 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -149,7 +149,8 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
>  	} else {
>  		desc_addr = vring->desc_iova;
>  		avail_addr = desc_addr + vring->num * sizeof(struct vring_desc);
> -		used_addr = RTE_ALIGN_CEIL((uintptr_t)(&vring->avail->ring[vring->num]),
> +		used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
> +					ring[vring->num]),
>  				VIRTIO_VRING_ALIGN);
The fix looks good to me, but I would go one step further (as a
follow-up, maybe?).
I see no good reason to recompute the addresses in the kick helper,
when they should be set once and for all at "setup" time.
Only an offset needs to be applied to account for the VA-to-IOVA
translation.
What do you think of something like:
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 187f81b066..dcb702ace3 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -118,7 +118,7 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 	struct vhost_vring_state state;
 	struct vring *vring = &dev->vrings.split[queue_sel];
 	struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
-	uint64_t desc_addr, avail_addr, used_addr;
+	uint64_t desc_addr, desc_iova_addr, avail_addr, used_addr;
 	struct vhost_vring_addr addr = {
 		.index = queue_sel,
 		.log_guest_addr = 0,
@@ -138,25 +138,22 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 	}

 	if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
-		desc_addr = pq_vring->desc_iova;
-		avail_addr = desc_addr + pq_vring->num * sizeof(struct vring_packed_desc);
-		used_addr = RTE_ALIGN_CEIL(avail_addr + sizeof(struct vring_packed_desc_event),
-				VIRTIO_VRING_ALIGN);
-
-		addr.desc_user_addr = desc_addr;
-		addr.avail_user_addr = avail_addr;
-		addr.used_user_addr = used_addr;
+		desc_iova_addr = pq_vring->desc_iova;
+		desc_addr = (uint64_t)(uintptr_t)pq_vring->desc;
+		avail_addr = (uint64_t)(uintptr_t)pq_vring->driver;
+		used_addr = (uint64_t)(uintptr_t)pq_vring->device;
+
 	} else {
-		desc_addr = vring->desc_iova;
-		avail_addr = desc_addr + vring->num * sizeof(struct vring_desc);
-		used_addr = RTE_ALIGN_CEIL((uintptr_t)(&vring->avail->ring[vring->num]),
-				VIRTIO_VRING_ALIGN);
-
-		addr.desc_user_addr = desc_addr;
-		addr.avail_user_addr = avail_addr;
-		addr.used_user_addr = used_addr;
+		desc_iova_addr = vring->desc_iova;
+		desc_addr = (uint64_t)(uintptr_t)vring->desc;
+		avail_addr = (uint64_t)(uintptr_t)vring->avail;
+		used_addr = (uint64_t)(uintptr_t)vring->used;
 	}
+	addr.desc_user_addr = desc_iova_addr;
+	addr.avail_user_addr = (desc_iova_addr - desc_addr) + avail_addr;
+	addr.used_user_addr = (desc_iova_addr - desc_addr) + used_addr;
+

 	state.index = queue_sel;
 	state.num = vring->num;
 	ret = dev->ops->set_vring_num(dev, &state);
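
One observation on the suggested diff: assuming, as the diff does, that
the descriptor, avail and used rings share one contiguous mapping, a
single VA-to-IOVA delta converts all three precomputed virtual
addresses, and the unsigned subtraction stays exact even when the IOVA
is numerically smaller than the VA. A hypothetical sketch (made-up
addresses):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Made-up addresses: the IOVA is far below the process VA. */
	uint64_t desc_va   = 0x7f1234561000;
	uint64_t desc_iova = 0x0000000a0000;
	uint64_t avail_va  = desc_va + 0x1000;
	uint64_t used_va   = desc_va + 0x2000;

	/* Wraps around on uint64_t, but adding it back wraps again, so
	 * (iova - va) + other_va is exact modulo 2^64. */
	uint64_t delta = desc_iova - desc_va;

	printf("avail IOVA: 0x%" PRIx64 "\n", avail_va + delta); /* 0xa1000 */
	printf("used  IOVA: 0x%" PRIx64 "\n", used_va + delta);  /* 0xa2000 */
	return 0;
}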
--
David Marchand