* [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend
@ 2024-02-29 13:29 Srujana Challa
2024-02-29 13:29 ` [PATCH v2 1/3] net/virtio_user: avoid cq descriptor buffer address accessing Srujana Challa
` (4 more replies)
0 siblings, 5 replies; 22+ messages in thread
From: Srujana Challa @ 2024-02-29 13:29 UTC (permalink / raw)
To: dev, maxime.coquelin, chenbox; +Cc: jerinj, ndabilpuram, vattunuru, schalla
This patch series makes Virtio-user work in IOVA as PA mode
for the vDPA backend.
The first patch fixes the handling of control queue descriptors
that carry buffer IOVA addresses.
The second and third patches share the descriptor IOVA address
with the vhost backend, and disable the use_va flag for the vDPA
backend type.
v1->v2:
- Split single patch into three patches.
Srujana Challa (3):
net/virtio_user: avoid cq descriptor buffer address accessing
net/virtio: store desc IOVA address in vring data structure
net/virtio_user: support sharing vq descriptor IOVA to the backend
drivers/net/virtio/virtio_ring.h | 12 ++-
.../net/virtio/virtio_user/virtio_user_dev.c | 94 ++++++++++---------
drivers/net/virtio/virtio_user_ethdev.c | 10 +-
drivers/net/virtio/virtqueue.c | 4 +-
4 files changed, 69 insertions(+), 51 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 1/3] net/virtio_user: avoid cq descriptor buffer address accessing
2024-02-29 13:29 [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
@ 2024-02-29 13:29 ` Srujana Challa
2024-06-28 8:07 ` Maxime Coquelin
2024-07-03 10:03 ` [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
2024-02-29 13:29 ` [PATCH v2 2/3] net/virtio: store desc IOVA address in vring data structure Srujana Challa
` (3 subsequent siblings)
4 siblings, 2 replies; 22+ messages in thread
From: Srujana Challa @ 2024-02-29 13:29 UTC (permalink / raw)
To: dev, maxime.coquelin, chenbox; +Cc: jerinj, ndabilpuram, vattunuru, schalla
This patch avoids dereferencing the descriptor buffer address
while processing the shadow control queue, so that Virtio-user
can work when descriptors carry IOVAs as buffer addresses.
Signed-off-by: Srujana Challa <schalla@marvell.com>
---
.../net/virtio/virtio_user/virtio_user_dev.c | 68 +++++++++----------
1 file changed, 33 insertions(+), 35 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index d395fc1676..bf3da4340f 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -885,11 +885,11 @@ static uint32_t
virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vring,
uint16_t idx_hdr)
{
- struct virtio_net_ctrl_hdr *hdr;
virtio_net_ctrl_ack status = ~0;
- uint16_t i, idx_data, idx_status;
+ uint16_t i, idx_data;
uint32_t n_descs = 0;
int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
+ struct virtio_pmd_ctrl *ctrl;
/* locate desc for header, data, and status */
idx_data = vring->desc[idx_hdr].next;
@@ -902,34 +902,33 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
n_descs++;
}
- /* locate desc for status */
- idx_status = i;
n_descs++;
- hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
- if (hdr->class == VIRTIO_NET_CTRL_MQ &&
- hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
- uint16_t queues;
+ /* Access control command via VA from CVQ */
+ ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr;
+ if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
+ ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
+ uint16_t *queues;
- queues = *(uint16_t *)(uintptr_t)vring->desc[idx_data].addr;
- status = virtio_user_handle_mq(dev, queues);
- } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
+ queues = (uint16_t *)ctrl->data;
+ status = virtio_user_handle_mq(dev, *queues);
+ } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
+ ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
struct virtio_net_ctrl_rss *rss;
- rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
+ rss = (struct virtio_net_ctrl_rss *)ctrl->data;
status = virtio_user_handle_mq(dev, rss->max_tx_vq);
- } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
- hdr->class == VIRTIO_NET_CTRL_MAC ||
- hdr->class == VIRTIO_NET_CTRL_VLAN) {
+ } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX ||
+ ctrl->hdr.class == VIRTIO_NET_CTRL_MAC ||
+ ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) {
status = 0;
}
if (!status && dev->scvq)
- status = virtio_send_command(&dev->scvq->cq,
- (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
+ status = virtio_send_command(&dev->scvq->cq, ctrl, dlen, nb_dlen);
/* Update status */
- *(virtio_net_ctrl_ack *)(uintptr_t)vring->desc[idx_status].addr = status;
+ ctrl->status = status;
return n_descs;
}
@@ -948,7 +947,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
struct vring_packed *vring,
uint16_t idx_hdr)
{
- struct virtio_net_ctrl_hdr *hdr;
+ struct virtio_pmd_ctrl *ctrl;
virtio_net_ctrl_ack status = ~0;
uint16_t idx_data, idx_status;
/* initialize to one, header is first */
@@ -971,32 +970,31 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
n_descs++;
}
- hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
- if (hdr->class == VIRTIO_NET_CTRL_MQ &&
- hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
- uint16_t queues;
+ /* Access control command via VA from CVQ */
+ ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr;
+ if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
+ ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
+ uint16_t *queues;
- queues = *(uint16_t *)(uintptr_t)
- vring->desc[idx_data].addr;
- status = virtio_user_handle_mq(dev, queues);
- } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
+ queues = (uint16_t *)ctrl->data;
+ status = virtio_user_handle_mq(dev, *queues);
+ } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
+ ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
struct virtio_net_ctrl_rss *rss;
- rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
+ rss = (struct virtio_net_ctrl_rss *)ctrl->data;
status = virtio_user_handle_mq(dev, rss->max_tx_vq);
- } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
- hdr->class == VIRTIO_NET_CTRL_MAC ||
- hdr->class == VIRTIO_NET_CTRL_VLAN) {
+ } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX ||
+ ctrl->hdr.class == VIRTIO_NET_CTRL_MAC ||
+ ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) {
status = 0;
}
if (!status && dev->scvq)
- status = virtio_send_command(&dev->scvq->cq,
- (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
+ status = virtio_send_command(&dev->scvq->cq, ctrl, dlen, nb_dlen);
/* Update status */
- *(virtio_net_ctrl_ack *)(uintptr_t)
- vring->desc[idx_status].addr = status;
+ ctrl->status = status;
/* Update used descriptor */
vring->desc[idx_hdr].id = vring->desc[idx_status].id;
--
2.25.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 2/3] net/virtio: store desc IOVA address in vring data structure
2024-02-29 13:29 [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
2024-02-29 13:29 ` [PATCH v2 1/3] net/virtio_user: avoid cq descriptor buffer address accessing Srujana Challa
@ 2024-02-29 13:29 ` Srujana Challa
2024-02-29 13:29 ` [PATCH v2 3/3] net/virtio_user: support sharing vq descriptor IOVA to the backend Srujana Challa
` (2 subsequent siblings)
4 siblings, 0 replies; 22+ messages in thread
From: Srujana Challa @ 2024-02-29 13:29 UTC (permalink / raw)
To: dev, maxime.coquelin, chenbox; +Cc: jerinj, ndabilpuram, vattunuru, schalla
Stores the desc IOVA in the queue's vring data structure,
as preliminary work to provide a way for Virtio-user
to share the desc IOVA with the vhost backend.
Signed-off-by: Srujana Challa <schalla@marvell.com>
---
drivers/net/virtio/virtio_ring.h | 12 ++++++++----
drivers/net/virtio/virtqueue.c | 4 ++--
2 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index e848c0b73b..998605dbb5 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -83,6 +83,7 @@ struct vring_packed_desc_event {
struct vring_packed {
unsigned int num;
+ rte_iova_t desc_iova;
struct vring_packed_desc *desc;
struct vring_packed_desc_event *driver;
struct vring_packed_desc_event *device;
@@ -90,6 +91,7 @@ struct vring_packed {
struct vring {
unsigned int num;
+ rte_iova_t desc_iova;
struct vring_desc *desc;
struct vring_avail *avail;
struct vring_used *used;
@@ -149,11 +151,12 @@ vring_size(struct virtio_hw *hw, unsigned int num, unsigned long align)
return size;
}
static inline void
-vring_init_split(struct vring *vr, uint8_t *p, unsigned long align,
- unsigned int num)
+vring_init_split(struct vring *vr, uint8_t *p, rte_iova_t iova,
+ unsigned long align, unsigned int num)
{
vr->num = num;
vr->desc = (struct vring_desc *) p;
+ vr->desc_iova = iova;
vr->avail = (struct vring_avail *) (p +
num * sizeof(struct vring_desc));
vr->used = (void *)
@@ -161,11 +164,12 @@ vring_init_split(struct vring *vr, uint8_t *p, unsigned long align,
}
static inline void
-vring_init_packed(struct vring_packed *vr, uint8_t *p, unsigned long align,
- unsigned int num)
+vring_init_packed(struct vring_packed *vr, uint8_t *p, rte_iova_t iova,
+ unsigned long align, unsigned int num)
{
vr->num = num;
vr->desc = (struct vring_packed_desc *)p;
+ vr->desc_iova = iova;
vr->driver = (struct vring_packed_desc_event *)(p +
vr->num * sizeof(struct vring_packed_desc));
vr->device = (struct vring_packed_desc_event *)
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 6f419665f1..cf46abfd06 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -282,13 +282,13 @@ virtio_init_vring(struct virtqueue *vq)
vq->vq_free_cnt = vq->vq_nentries;
memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
if (virtio_with_packed_queue(vq->hw)) {
- vring_init_packed(&vq->vq_packed.ring, ring_mem,
+ vring_init_packed(&vq->vq_packed.ring, ring_mem, vq->vq_ring_mem,
VIRTIO_VRING_ALIGN, size);
vring_desc_init_packed(vq, size);
} else {
struct vring *vr = &vq->vq_split.ring;
- vring_init_split(vr, ring_mem, VIRTIO_VRING_ALIGN, size);
+ vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_VRING_ALIGN, size);
vring_desc_init_split(vr->desc, size);
}
/*
--
2.25.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 3/3] net/virtio_user: support sharing vq descriptor IOVA to the backend
2024-02-29 13:29 [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
2024-02-29 13:29 ` [PATCH v2 1/3] net/virtio_user: avoid cq descriptor buffer address accessing Srujana Challa
2024-02-29 13:29 ` [PATCH v2 2/3] net/virtio: store desc IOVA address in vring data structure Srujana Challa
@ 2024-02-29 13:29 ` Srujana Challa
2024-06-19 9:39 ` [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
2024-06-28 13:33 ` Maxime Coquelin
4 siblings, 0 replies; 22+ messages in thread
From: Srujana Challa @ 2024-02-29 13:29 UTC (permalink / raw)
To: dev, maxime.coquelin, chenbox; +Cc: jerinj, ndabilpuram, vattunuru, schalla
Adds support for sharing the descriptor IOVA with the vhost backend.
This makes the virtio_user driver work in IOVA as PA mode when
the use_va flag is disabled.
This patch also disables the use_va flag for the vDPA backend.
Signed-off-by: Srujana Challa <schalla@marvell.com>
---
.../net/virtio/virtio_user/virtio_user_dev.c | 26 ++++++++++++-------
drivers/net/virtio/virtio_user_ethdev.c | 10 ++++++-
2 files changed, 26 insertions(+), 10 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index bf3da4340f..8ad10e6354 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -62,6 +62,7 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
struct vhost_vring_state state;
struct vring *vring = &dev->vrings.split[queue_sel];
struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
+ uint64_t desc_addr, avail_addr, used_addr;
struct vhost_vring_addr addr = {
.index = queue_sel,
.log_guest_addr = 0,
@@ -81,16 +82,23 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
}
if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
- addr.desc_user_addr =
- (uint64_t)(uintptr_t)pq_vring->desc;
- addr.avail_user_addr =
- (uint64_t)(uintptr_t)pq_vring->driver;
- addr.used_user_addr =
- (uint64_t)(uintptr_t)pq_vring->device;
+ desc_addr = pq_vring->desc_iova;
+ avail_addr = desc_addr + pq_vring->num * sizeof(struct vring_packed_desc);
+ used_addr = RTE_ALIGN_CEIL(avail_addr + sizeof(struct vring_packed_desc_event),
+ VIRTIO_VRING_ALIGN);
+
+ addr.desc_user_addr = desc_addr;
+ addr.avail_user_addr = avail_addr;
+ addr.used_user_addr = used_addr;
} else {
- addr.desc_user_addr = (uint64_t)(uintptr_t)vring->desc;
- addr.avail_user_addr = (uint64_t)(uintptr_t)vring->avail;
- addr.used_user_addr = (uint64_t)(uintptr_t)vring->used;
+ desc_addr = vring->desc_iova;
+ avail_addr = desc_addr + vring->num * sizeof(struct vring_desc);
+ used_addr = RTE_ALIGN_CEIL((uintptr_t)(&vring->avail->ring[vring->num]),
+ VIRTIO_VRING_ALIGN);
+
+ addr.desc_user_addr = desc_addr;
+ addr.avail_user_addr = avail_addr;
+ addr.used_user_addr = used_addr;
}
state.index = queue_sel;
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index bf9de36d8f..ae6593ba0b 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -198,6 +198,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
sizeof(struct vring_packed_desc_event),
VIRTIO_VRING_ALIGN);
vring->num = vq->vq_nentries;
+ vring->desc_iova = vq->vq_ring_mem;
vring->desc = (void *)(uintptr_t)desc_addr;
vring->driver = (void *)(uintptr_t)avail_addr;
vring->device = (void *)(uintptr_t)used_addr;
@@ -221,6 +222,7 @@ virtio_user_setup_queue_split(struct virtqueue *vq, struct virtio_user_dev *dev)
VIRTIO_VRING_ALIGN);
dev->vrings.split[queue_idx].num = vq->vq_nentries;
+ dev->vrings.split[queue_idx].desc_iova = vq->vq_ring_mem;
dev->vrings.split[queue_idx].desc = (void *)(uintptr_t)desc_addr;
dev->vrings.split[queue_idx].avail = (void *)(uintptr_t)avail_addr;
dev->vrings.split[queue_idx].used = (void *)(uintptr_t)used_addr;
@@ -689,7 +691,13 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
* Virtio-user requires using virtual addresses for the descriptors
* buffers, whatever other devices require
*/
- hw->use_va = true;
+ if (backend_type == VIRTIO_USER_BACKEND_VHOST_VDPA)
+ /* VDPA backend requires using iova for the buffers to make it
+ * work in IOVA as PA mode also.
+ */
+ hw->use_va = false;
+ else
+ hw->use_va = true;
/* previously called by pci probing for physical dev */
if (eth_virtio_dev_init(eth_dev) < 0) {
--
2.25.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* RE: [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend
2024-02-29 13:29 [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
` (2 preceding siblings ...)
2024-02-29 13:29 ` [PATCH v2 3/3] net/virtio_user: support sharing vq descriptor IOVA to the backend Srujana Challa
@ 2024-06-19 9:39 ` Srujana Challa
2024-06-28 13:33 ` Maxime Coquelin
4 siblings, 0 replies; 22+ messages in thread
From: Srujana Challa @ 2024-06-19 9:39 UTC (permalink / raw)
To: Srujana Challa, dev, maxime.coquelin, chenbox
Cc: Jerin Jacob, Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru
Ping.
> Subject: [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA
> backend
>
> This patch series makes Virtio-user works in IOVA as PA mode for vDPA
> backend.
>
> First patch fixes the issue when having buffer IOVA address in control queue
> descriptors.
> Second and third patches helps to share descriptor IOVA address, to the vhost
> backend. And also disables the use_va flag for VDPA backend type.
>
> v1->v2:
> - Split single patch into three patches.
>
> Srujana Challa (3):
> net/virtio_user: avoid cq descriptor buffer address accessing
> net/virtio: store desc IOVA address in vring data structure
> net/virtio_user: support sharing vq descriptor IOVA to the backend
>
> drivers/net/virtio/virtio_ring.h | 12 ++-
> .../net/virtio/virtio_user/virtio_user_dev.c | 94 ++++++++++---------
> drivers/net/virtio/virtio_user_ethdev.c | 10 +-
> drivers/net/virtio/virtqueue.c | 4 +-
> 4 files changed, 69 insertions(+), 51 deletions(-)
>
> --
> 2.25.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v2 1/3] net/virtio_user: avoid cq descriptor buffer address accessing
2024-02-29 13:29 ` [PATCH v2 1/3] net/virtio_user: avoid cq descriptor buffer address accessing Srujana Challa
@ 2024-06-28 8:07 ` Maxime Coquelin
2024-07-02 11:09 ` [EXTERNAL] " Srujana Challa
2024-07-03 10:03 ` [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
1 sibling, 1 reply; 22+ messages in thread
From: Maxime Coquelin @ 2024-06-28 8:07 UTC (permalink / raw)
To: Srujana Challa, dev, chenbox; +Cc: jerinj, ndabilpuram, vattunuru
On 2/29/24 14:29, Srujana Challa wrote:
> This patch makes changes to avoid descriptor buffer address
> accessing while processing shadow control queue.
> So that Virtio-user can work with having IOVA as descriptor
> buffer address.
>
> Signed-off-by: Srujana Challa <schalla@marvell.com>
> ---
> .../net/virtio/virtio_user/virtio_user_dev.c | 68 +++++++++----------
> 1 file changed, 33 insertions(+), 35 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index d395fc1676..bf3da4340f 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -885,11 +885,11 @@ static uint32_t
> virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vring,
> uint16_t idx_hdr)
> {
> - struct virtio_net_ctrl_hdr *hdr;
> virtio_net_ctrl_ack status = ~0;
> - uint16_t i, idx_data, idx_status;
> + uint16_t i, idx_data;
> uint32_t n_descs = 0;
> int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
> + struct virtio_pmd_ctrl *ctrl;
>
> /* locate desc for header, data, and status */
> idx_data = vring->desc[idx_hdr].next;
> @@ -902,34 +902,33 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
> n_descs++;
> }
>
> - /* locate desc for status */
> - idx_status = i;
> n_descs++;
>
> - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
> - if (hdr->class == VIRTIO_NET_CTRL_MQ &&
> - hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> - uint16_t queues;
> + /* Access control command via VA from CVQ */
> + ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr;
> + if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
> + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> + uint16_t *queues;
This is not future proof, as it just discards the index and assumes the
buffer will always be at the same place.
We should find a way to perform the desc address translation.
>
> - queues = *(uint16_t *)(uintptr_t)vring->desc[idx_data].addr;
> - status = virtio_user_handle_mq(dev, queues);
> - } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> + queues = (uint16_t *)ctrl->data;
> + status = virtio_user_handle_mq(dev, *queues);
> + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
> + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> struct virtio_net_ctrl_rss *rss;
>
> - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
> + rss = (struct virtio_net_ctrl_rss *)ctrl->data;
> status = virtio_user_handle_mq(dev, rss->max_tx_vq);
> - } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
> - hdr->class == VIRTIO_NET_CTRL_MAC ||
> - hdr->class == VIRTIO_NET_CTRL_VLAN) {
> + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX ||
> + ctrl->hdr.class == VIRTIO_NET_CTRL_MAC ||
> + ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) {
> status = 0;
> }
>
> if (!status && dev->scvq)
> - status = virtio_send_command(&dev->scvq->cq,
> - (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
> + status = virtio_send_command(&dev->scvq->cq, ctrl, dlen, nb_dlen);
>
> /* Update status */
> - *(virtio_net_ctrl_ack *)(uintptr_t)vring->desc[idx_status].addr = status;
> + ctrl->status = status;
>
> return n_descs;
> }
> @@ -948,7 +947,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
> struct vring_packed *vring,
> uint16_t idx_hdr)
> {
> - struct virtio_net_ctrl_hdr *hdr;
> + struct virtio_pmd_ctrl *ctrl;
> virtio_net_ctrl_ack status = ~0;
> uint16_t idx_data, idx_status;
> /* initialize to one, header is first */
> @@ -971,32 +970,31 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
> n_descs++;
> }
>
> - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
> - if (hdr->class == VIRTIO_NET_CTRL_MQ &&
> - hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> - uint16_t queues;
> + /* Access control command via VA from CVQ */
> + ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr;
> + if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
> + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> + uint16_t *queues;
>
> - queues = *(uint16_t *)(uintptr_t)
> - vring->desc[idx_data].addr;
> - status = virtio_user_handle_mq(dev, queues);
> - } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> + queues = (uint16_t *)ctrl->data;
> + status = virtio_user_handle_mq(dev, *queues);
> + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
> + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> struct virtio_net_ctrl_rss *rss;
>
> - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
> + rss = (struct virtio_net_ctrl_rss *)ctrl->data;
> status = virtio_user_handle_mq(dev, rss->max_tx_vq);
> - } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
> - hdr->class == VIRTIO_NET_CTRL_MAC ||
> - hdr->class == VIRTIO_NET_CTRL_VLAN) {
> + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX ||
> + ctrl->hdr.class == VIRTIO_NET_CTRL_MAC ||
> + ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) {
> status = 0;
> }
>
> if (!status && dev->scvq)
> - status = virtio_send_command(&dev->scvq->cq,
> - (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
> + status = virtio_send_command(&dev->scvq->cq, ctrl, dlen, nb_dlen);
>
> /* Update status */
> - *(virtio_net_ctrl_ack *)(uintptr_t)
> - vring->desc[idx_status].addr = status;
> + ctrl->status = status;
>
> /* Update used descriptor */
> vring->desc[idx_hdr].id = vring->desc[idx_status].id;
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend
2024-02-29 13:29 [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
` (3 preceding siblings ...)
2024-06-19 9:39 ` [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
@ 2024-06-28 13:33 ` Maxime Coquelin
4 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2024-06-28 13:33 UTC (permalink / raw)
To: Srujana Challa, dev, chenbox; +Cc: jerinj, ndabilpuram, vattunuru
On 2/29/24 14:29, Srujana Challa wrote:
> This patch series makes Virtio-user works in IOVA as PA mode
> for vDPA backend.
>
> First patch fixes the issue when having buffer IOVA address in
> control queue descriptors.
> Second and third patches helps to share descriptor IOVA address,
> to the vhost backend. And also disables the use_va flag for VDPA
> backend type.
>
> v1->v2:
> - Split single patch into three patches.
>
> Srujana Challa (3):
> net/virtio_user: avoid cq descriptor buffer address accessing
> net/virtio: store desc IOVA address in vring data structure
> net/virtio_user: support sharing vq descriptor IOVA to the backend
>
> drivers/net/virtio/virtio_ring.h | 12 ++-
> .../net/virtio/virtio_user/virtio_user_dev.c | 94 ++++++++++---------
> drivers/net/virtio/virtio_user_ethdev.c | 10 +-
> drivers/net/virtio/virtqueue.c | 4 +-
> 4 files changed, 69 insertions(+), 51 deletions(-)
>
I think we need a way to perform the address translation from
iova to va between the Virtio and Virtio-user layers. Does that sound
reasonable?
Thanks,
Maxime
^ permalink raw reply [flat|nested] 22+ messages in thread
* RE: [EXTERNAL] Re: [PATCH v2 1/3] net/virtio_user: avoid cq descriptor buffer address accessing
2024-06-28 8:07 ` Maxime Coquelin
@ 2024-07-02 11:09 ` Srujana Challa
2024-07-02 12:41 ` Maxime Coquelin
0 siblings, 1 reply; 22+ messages in thread
From: Srujana Challa @ 2024-07-02 11:09 UTC (permalink / raw)
To: Maxime Coquelin, dev, chenbox
Cc: Jerin Jacob, Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru
> On 2/29/24 14:29, Srujana Challa wrote:
> > This patch makes changes to avoid descriptor buffer address accessing
> > while processing shadow control queue.
> > So that Virtio-user can work with having IOVA as descriptor buffer
> > address.
> >
> > Signed-off-by: Srujana Challa <schalla@marvell.com>
> > ---
> > .../net/virtio/virtio_user/virtio_user_dev.c | 68 +++++++++----------
> > 1 file changed, 33 insertions(+), 35 deletions(-)
> >
> > diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > index d395fc1676..bf3da4340f 100644
> > --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > @@ -885,11 +885,11 @@ static uint32_t
> > virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring
> *vring,
> > uint16_t idx_hdr)
> > {
> > - struct virtio_net_ctrl_hdr *hdr;
> > virtio_net_ctrl_ack status = ~0;
> > - uint16_t i, idx_data, idx_status;
> > + uint16_t i, idx_data;
> > uint32_t n_descs = 0;
> > int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
> > + struct virtio_pmd_ctrl *ctrl;
> >
> > /* locate desc for header, data, and status */
> > idx_data = vring->desc[idx_hdr].next; @@ -902,34 +902,33 @@
> > virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring
> *vri
> > n_descs++;
> > }
> >
> > - /* locate desc for status */
> > - idx_status = i;
> > n_descs++;
> >
> > - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
> > - if (hdr->class == VIRTIO_NET_CTRL_MQ &&
> > - hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> > - uint16_t queues;
> > + /* Access control command via VA from CVQ */
> > + ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr;
> > + if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
> > + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> > + uint16_t *queues;
>
> This is not future proof, as it just discards the index and assume the buffer will
> always be at the same place.
> We should find a way to perform the desc address translation.
Can we use rte_mem_iova2virt() here?
>
> >
> > - queues = *(uint16_t *)(uintptr_t)vring->desc[idx_data].addr;
> > - status = virtio_user_handle_mq(dev, queues);
> > - } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd ==
> VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> > + queues = (uint16_t *)ctrl->data;
> > + status = virtio_user_handle_mq(dev, *queues);
> > + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
> > + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> > struct virtio_net_ctrl_rss *rss;
> >
> > - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring-
> >desc[idx_data].addr;
> > + rss = (struct virtio_net_ctrl_rss *)ctrl->data;
> > status = virtio_user_handle_mq(dev, rss->max_tx_vq);
> > - } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
> > - hdr->class == VIRTIO_NET_CTRL_MAC ||
> > - hdr->class == VIRTIO_NET_CTRL_VLAN) {
> > + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX ||
> > + ctrl->hdr.class == VIRTIO_NET_CTRL_MAC ||
> > + ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) {
> > status = 0;
> > }
> >
> > if (!status && dev->scvq)
> > - status = virtio_send_command(&dev->scvq->cq,
> > - (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
> > + status = virtio_send_command(&dev->scvq->cq, ctrl, dlen,
> nb_dlen);
> >
> > /* Update status */
> > - *(virtio_net_ctrl_ack *)(uintptr_t)vring->desc[idx_status].addr =
> status;
> > + ctrl->status = status;
> >
> > return n_descs;
> > }
> > @@ -948,7 +947,7 @@ virtio_user_handle_ctrl_msg_packed(struct
> virtio_user_dev *dev,
> > struct vring_packed *vring,
> > uint16_t idx_hdr)
> > {
> > - struct virtio_net_ctrl_hdr *hdr;
> > + struct virtio_pmd_ctrl *ctrl;
> > virtio_net_ctrl_ack status = ~0;
> > uint16_t idx_data, idx_status;
> > /* initialize to one, header is first */ @@ -971,32 +970,31 @@
> > virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
> > n_descs++;
> > }
> >
> > - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
> > - if (hdr->class == VIRTIO_NET_CTRL_MQ &&
> > - hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> > - uint16_t queues;
> > + /* Access control command via VA from CVQ */
> > + ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr;
> > + if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
> > + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> > + uint16_t *queues;
> >
> > - queues = *(uint16_t *)(uintptr_t)
> > - vring->desc[idx_data].addr;
> > - status = virtio_user_handle_mq(dev, queues);
> > - } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd ==
> VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> > + queues = (uint16_t *)ctrl->data;
> > + status = virtio_user_handle_mq(dev, *queues);
> > + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
> > + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> > struct virtio_net_ctrl_rss *rss;
> >
> > - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring-
> >desc[idx_data].addr;
> > + rss = (struct virtio_net_ctrl_rss *)ctrl->data;
> > status = virtio_user_handle_mq(dev, rss->max_tx_vq);
> > - } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
> > - hdr->class == VIRTIO_NET_CTRL_MAC ||
> > - hdr->class == VIRTIO_NET_CTRL_VLAN) {
> > + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX ||
> > + ctrl->hdr.class == VIRTIO_NET_CTRL_MAC ||
> > + ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) {
> > status = 0;
> > }
> >
> > if (!status && dev->scvq)
> > - status = virtio_send_command(&dev->scvq->cq,
> > - (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
> > + status = virtio_send_command(&dev->scvq->cq, ctrl, dlen,
> nb_dlen);
> >
> > /* Update status */
> > - *(virtio_net_ctrl_ack *)(uintptr_t)
> > - vring->desc[idx_status].addr = status;
> > + ctrl->status = status;
> >
> > /* Update used descriptor */
> > vring->desc[idx_hdr].id = vring->desc[idx_status].id;
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [EXTERNAL] Re: [PATCH v2 1/3] net/virtio_user: avoid cq descriptor buffer address accessing
2024-07-02 11:09 ` [EXTERNAL] " Srujana Challa
@ 2024-07-02 12:41 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2024-07-02 12:41 UTC (permalink / raw)
To: Srujana Challa, dev, chenbox
Cc: Jerin Jacob, Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru
On 7/2/24 13:09, Srujana Challa wrote:
>> On 2/29/24 14:29, Srujana Challa wrote:
>>> This patch makes changes to avoid descriptor buffer address accessing
>>> while processing shadow control queue.
>>> So that Virtio-user can work with having IOVA as descriptor buffer
>>> address.
>>>
>>> Signed-off-by: Srujana Challa <schalla@marvell.com>
>>> ---
>>> .../net/virtio/virtio_user/virtio_user_dev.c | 68 +++++++++----------
>>> 1 file changed, 33 insertions(+), 35 deletions(-)
>>>
>>> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c
>>> b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>>> index d395fc1676..bf3da4340f 100644
>>> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
>>> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>>> @@ -885,11 +885,11 @@ static uint32_t
>>> virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring
>> *vring,
>>> uint16_t idx_hdr)
>>> {
>>> - struct virtio_net_ctrl_hdr *hdr;
>>> virtio_net_ctrl_ack status = ~0;
>>> - uint16_t i, idx_data, idx_status;
>>> + uint16_t i, idx_data;
>>> uint32_t n_descs = 0;
>>> int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
>>> + struct virtio_pmd_ctrl *ctrl;
>>>
>>> /* locate desc for header, data, and status */
>>> idx_data = vring->desc[idx_hdr].next; @@ -902,34 +902,33 @@
>>> virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring
>> *vri
>>> n_descs++;
>>> }
>>>
>>> - /* locate desc for status */
>>> - idx_status = i;
>>> n_descs++;
>>>
>>> - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
>>> - if (hdr->class == VIRTIO_NET_CTRL_MQ &&
>>> - hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
>>> - uint16_t queues;
>>> + /* Access control command via VA from CVQ */
>>> + ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr;
>>> + if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
>>> + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
>>> + uint16_t *queues;
>>
>> This is not future proof, as it just discards the index and assume the buffer will
>> always be at the same place.
>> We should find a way to perform the desc address translation.
> Can we use rte_mem_iova2virt() here?
It should be safe to use it here.
Can you send a new revision ASAP, which would use this API and not take
the shortcut, i.e. keep fetching buffer addresses from descriptors?
Thanks,
Maxime
>
>>
>>>
>>> - queues = *(uint16_t *)(uintptr_t)vring->desc[idx_data].addr;
>>> - status = virtio_user_handle_mq(dev, queues);
>>> - } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd ==
>> VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
>>> + queues = (uint16_t *)ctrl->data;
>>> + status = virtio_user_handle_mq(dev, *queues);
>>> + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
>>> + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
>>> struct virtio_net_ctrl_rss *rss;
>>>
>>> - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring-
>>> desc[idx_data].addr;
>>> + rss = (struct virtio_net_ctrl_rss *)ctrl->data;
>>> status = virtio_user_handle_mq(dev, rss->max_tx_vq);
>>> - } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
>>> - hdr->class == VIRTIO_NET_CTRL_MAC ||
>>> - hdr->class == VIRTIO_NET_CTRL_VLAN) {
>>> + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX ||
>>> + ctrl->hdr.class == VIRTIO_NET_CTRL_MAC ||
>>> + ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) {
>>> status = 0;
>>> }
>>>
>>> if (!status && dev->scvq)
>>> - status = virtio_send_command(&dev->scvq->cq,
>>> - (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
>>> + status = virtio_send_command(&dev->scvq->cq, ctrl, dlen,
>> nb_dlen);
>>>
>>> /* Update status */
>>> - *(virtio_net_ctrl_ack *)(uintptr_t)vring->desc[idx_status].addr =
>> status;
>>> + ctrl->status = status;
>>>
>>> return n_descs;
>>> }
>>> @@ -948,7 +947,7 @@ virtio_user_handle_ctrl_msg_packed(struct
>> virtio_user_dev *dev,
>>> struct vring_packed *vring,
>>> uint16_t idx_hdr)
>>> {
>>> - struct virtio_net_ctrl_hdr *hdr;
>>> + struct virtio_pmd_ctrl *ctrl;
>>> virtio_net_ctrl_ack status = ~0;
>>> uint16_t idx_data, idx_status;
>>> /* initialize to one, header is first */ @@ -971,32 +970,31 @@
>>> virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
>>> n_descs++;
>>> }
>>>
>>> - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
>>> - if (hdr->class == VIRTIO_NET_CTRL_MQ &&
>>> - hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
>>> - uint16_t queues;
>>> + /* Access control command via VA from CVQ */
>>> + ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr;
>>> + if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
>>> + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
>>> + uint16_t *queues;
>>>
>>> - queues = *(uint16_t *)(uintptr_t)
>>> - vring->desc[idx_data].addr;
>>> - status = virtio_user_handle_mq(dev, queues);
>>> - } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd ==
>> VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
>>> + queues = (uint16_t *)ctrl->data;
>>> + status = virtio_user_handle_mq(dev, *queues);
>>> + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
>>> + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
>>> struct virtio_net_ctrl_rss *rss;
>>>
>>> - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring-
>>> desc[idx_data].addr;
>>> + rss = (struct virtio_net_ctrl_rss *)ctrl->data;
>>> status = virtio_user_handle_mq(dev, rss->max_tx_vq);
>>> - } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
>>> - hdr->class == VIRTIO_NET_CTRL_MAC ||
>>> - hdr->class == VIRTIO_NET_CTRL_VLAN) {
>>> + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX ||
>>> + ctrl->hdr.class == VIRTIO_NET_CTRL_MAC ||
>>> + ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) {
>>> status = 0;
>>> }
>>>
>>> if (!status && dev->scvq)
>>> - status = virtio_send_command(&dev->scvq->cq,
>>> - (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
>>> + status = virtio_send_command(&dev->scvq->cq, ctrl, dlen,
>> nb_dlen);
>>>
>>> /* Update status */
>>> - *(virtio_net_ctrl_ack *)(uintptr_t)
>>> - vring->desc[idx_status].addr = status;
>>> + ctrl->status = status;
>>>
>>> /* Update used descriptor */
>>> vring->desc[idx_hdr].id = vring->desc[idx_status].id;
>
^ permalink raw reply [flat|nested] 22+ messages in thread
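The rte_mem_iova2virt() approach agreed on above can be sketched with a standalone model. The struct and lookup below are illustrative stand-ins, not DPDK's actual memseg machinery: the idea is simply to walk the registered memory segments and map an IOVA back into the matching segment's virtual address range.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical model of IOVA-to-VA translation; NOT the real
 * rte_mem_iova2virt(), just a sketch of the segment walk it performs. */
typedef uint64_t iova_t;

struct memseg {
	iova_t iova;   /* start of segment in IO address space */
	uintptr_t va;  /* corresponding virtual address */
	size_t len;    /* segment length in bytes */
};

/* Linear scan over a (tiny, assumed) segment table. */
static void *
memseg_iova2virt(const struct memseg *segs, size_t n_segs, iova_t iova)
{
	for (size_t i = 0; i < n_segs; i++) {
		if (iova >= segs[i].iova && iova < segs[i].iova + segs[i].len)
			return (void *)(segs[i].va + (iova - segs[i].iova));
	}
	return NULL; /* IOVA not backed by any registered segment */
}
```

This is why the helper is only needed in IOVA-as-PA mode: in VA mode the descriptor address already is a virtual address and the cast suffices.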
* [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend
2024-02-29 13:29 ` [PATCH v2 1/3] net/virtio_user: avoid cq descriptor buffer address accessing Srujana Challa
2024-06-28 8:07 ` Maxime Coquelin
@ 2024-07-03 10:03 ` Srujana Challa
2024-07-03 10:03 ` [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address Srujana Challa
` (3 more replies)
1 sibling, 4 replies; 22+ messages in thread
From: Srujana Challa @ 2024-07-03 10:03 UTC (permalink / raw)
To: dev, maxime.coquelin, chenbox; +Cc: jerinj, ndabilpuram, vattunuru, schalla
This patch series makes Virtio-user work in IOVA as PA mode
for the vDPA backend.
The first patch fixes the issue of having buffer IOVA addresses in
control queue descriptors.
The second and third patches help share the descriptor IOVA address
with the vhost backend, and also disable the use_va flag for the
vDPA backend type.
v1->v2:
- Split single patch into three patches.
v2->v3:
- Addressed the review comment by using rte_mem_iova2virt() for desc
address translation.
Srujana Challa (3):
net/virtio_user: convert cq descriptor IOVA address to Virtual address
net/virtio: store desc IOVA address in vring data structure
net/virtio_user: support sharing vq descriptor IOVA to the backend
drivers/net/virtio/virtio_ring.h | 12 ++--
.../net/virtio/virtio_user/virtio_user_dev.c | 59 ++++++++++++-------
drivers/net/virtio/virtio_user_ethdev.c | 10 +++-
drivers/net/virtio/virtqueue.c | 4 +-
4 files changed, 57 insertions(+), 28 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address
2024-07-03 10:03 ` [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
@ 2024-07-03 10:03 ` Srujana Challa
2024-07-03 10:19 ` Jerin Jacob
` (2 more replies)
2024-07-03 10:03 ` [PATCH v3 2/3] net/virtio: store desc IOVA address in vring data structure Srujana Challa
` (2 subsequent siblings)
3 siblings, 3 replies; 22+ messages in thread
From: Srujana Challa @ 2024-07-03 10:03 UTC (permalink / raw)
To: dev, maxime.coquelin, chenbox; +Cc: jerinj, ndabilpuram, vattunuru, schalla
This patch modifies the code to convert descriptor buffer IOVA
addresses to virtual addresses during the processing of the shadow
control queue when IOVA mode is PA. This change enables Virtio-user
to operate with IOVA as the descriptor buffer address.
Signed-off-by: Srujana Challa <schalla@marvell.com>
---
.../net/virtio/virtio_user/virtio_user_dev.c | 33 ++++++++++++-------
1 file changed, 21 insertions(+), 12 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 1365c8a5c8..7f35f4b06b 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -896,6 +896,15 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
#define CVQ_MAX_DATA_DESCS 32
+static inline void *
+virtio_user_iova2virt(rte_iova_t iova)
+{
+ if (rte_eal_iova_mode() == RTE_IOVA_PA)
+ return rte_mem_iova2virt(iova);
+ else
+ return (void *)(uintptr_t)iova;
+}
+
static uint32_t
virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vring,
uint16_t idx_hdr)
@@ -921,17 +930,18 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
idx_status = i;
n_descs++;
- hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
+ hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
if (hdr->class == VIRTIO_NET_CTRL_MQ &&
hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
- uint16_t queues;
+ uint16_t queues, *addr;
- queues = *(uint16_t *)(uintptr_t)vring->desc[idx_data].addr;
+ addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+ queues = *addr;
status = virtio_user_handle_mq(dev, queues);
} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
struct virtio_net_ctrl_rss *rss;
- rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
+ rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
status = virtio_user_handle_mq(dev, rss->max_tx_vq);
} else if (hdr->class == VIRTIO_NET_CTRL_RX ||
hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -944,7 +954,7 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
/* Update status */
- *(virtio_net_ctrl_ack *)(uintptr_t)vring->desc[idx_status].addr = status;
+ *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
return n_descs;
}
@@ -986,18 +996,18 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
n_descs++;
}
- hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
+ hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
if (hdr->class == VIRTIO_NET_CTRL_MQ &&
hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
- uint16_t queues;
+ uint16_t queues, *addr;
- queues = *(uint16_t *)(uintptr_t)
- vring->desc[idx_data].addr;
+ addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+ queues = *addr;
status = virtio_user_handle_mq(dev, queues);
} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
struct virtio_net_ctrl_rss *rss;
- rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
+ rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
status = virtio_user_handle_mq(dev, rss->max_tx_vq);
} else if (hdr->class == VIRTIO_NET_CTRL_RX ||
hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -1010,8 +1020,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
/* Update status */
- *(virtio_net_ctrl_ack *)(uintptr_t)
- vring->desc[idx_status].addr = status;
+ *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
/* Update used descriptor */
vring->desc[idx_hdr].id = vring->desc[idx_status].id;
--
2.25.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v3 2/3] net/virtio: store desc IOVA address in vring data structure
2024-07-03 10:03 ` [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
2024-07-03 10:03 ` [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address Srujana Challa
@ 2024-07-03 10:03 ` Srujana Challa
2024-07-03 13:41 ` Maxime Coquelin
2024-07-03 10:03 ` [PATCH v3 3/3] net/virtio_user: support sharing vq descriptor IOVA to the backend Srujana Challa
2024-07-03 14:34 ` [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend Maxime Coquelin
3 siblings, 1 reply; 22+ messages in thread
From: Srujana Challa @ 2024-07-03 10:03 UTC (permalink / raw)
To: dev, maxime.coquelin, chenbox; +Cc: jerinj, ndabilpuram, vattunuru, schalla
Stores the desc IOVA in the queue's vring data structure,
as preliminary work to provide a way for Virtio-user
to share the desc IOVA with the vhost backend.
Signed-off-by: Srujana Challa <schalla@marvell.com>
---
drivers/net/virtio/virtio_ring.h | 12 ++++++++----
drivers/net/virtio/virtqueue.c | 4 ++--
2 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 2a25751152..57568c66a9 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -83,6 +83,7 @@ struct vring_packed_desc_event {
struct vring_packed {
unsigned int num;
+ rte_iova_t desc_iova;
struct vring_packed_desc *desc;
struct vring_packed_desc_event *driver;
struct vring_packed_desc_event *device;
@@ -90,6 +91,7 @@ struct vring_packed {
struct vring {
unsigned int num;
+ rte_iova_t desc_iova;
struct vring_desc *desc;
struct vring_avail *avail;
struct vring_used *used;
@@ -149,11 +151,12 @@ vring_size(struct virtio_hw *hw, unsigned int num, unsigned long align)
return size;
}
static inline void
-vring_init_split(struct vring *vr, uint8_t *p, unsigned long align,
- unsigned int num)
+vring_init_split(struct vring *vr, uint8_t *p, rte_iova_t iova,
+ unsigned long align, unsigned int num)
{
vr->num = num;
vr->desc = (struct vring_desc *) p;
+ vr->desc_iova = iova;
vr->avail = (struct vring_avail *) (p +
num * sizeof(struct vring_desc));
vr->used = (void *)
@@ -161,11 +164,12 @@ vring_init_split(struct vring *vr, uint8_t *p, unsigned long align,
}
static inline void
-vring_init_packed(struct vring_packed *vr, uint8_t *p, unsigned long align,
- unsigned int num)
+vring_init_packed(struct vring_packed *vr, uint8_t *p, rte_iova_t iova,
+ unsigned long align, unsigned int num)
{
vr->num = num;
vr->desc = (struct vring_packed_desc *)p;
+ vr->desc_iova = iova;
vr->driver = (struct vring_packed_desc_event *)(p +
vr->num * sizeof(struct vring_packed_desc));
vr->device = (struct vring_packed_desc_event *)
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 6f419665f1..cf46abfd06 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -282,13 +282,13 @@ virtio_init_vring(struct virtqueue *vq)
vq->vq_free_cnt = vq->vq_nentries;
memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
if (virtio_with_packed_queue(vq->hw)) {
- vring_init_packed(&vq->vq_packed.ring, ring_mem,
+ vring_init_packed(&vq->vq_packed.ring, ring_mem, vq->vq_ring_mem,
VIRTIO_VRING_ALIGN, size);
vring_desc_init_packed(vq, size);
} else {
struct vring *vr = &vq->vq_split.ring;
- vring_init_split(vr, ring_mem, VIRTIO_VRING_ALIGN, size);
+ vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_VRING_ALIGN, size);
vring_desc_init_split(vr->desc, size);
}
/*
--
2.25.1
^ permalink raw reply [flat|nested] 22+ messages in thread
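The split-ring layout that vring_init_split() above relies on can be sketched as plain offset arithmetic. The sizes follow the virtio split-ring layout (16-byte descriptors, u16 avail entries); the names below are illustrative, not DPDK's, and the used-ring offset mirrors the &avail->ring[num] rounding used elsewhere in this series.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative 16-byte split-ring descriptor (matches the virtio layout). */
struct sketch_vring_desc {
	uint64_t addr;
	uint32_t len;
	uint16_t flags;
	uint16_t next;
};

#define ALIGN_CEIL(v, a) (((uintptr_t)(v) + (a) - 1) & ~((uintptr_t)(a) - 1))

/* Avail ring starts right after the descriptor table. */
static uintptr_t split_avail_off(unsigned int num)
{
	return num * sizeof(struct sketch_vring_desc);
}

/* Used ring starts after the avail header (flags + idx, two u16) and
 * num u16 ring entries, rounded up to the ring alignment. */
static uintptr_t split_used_off(unsigned int num, unsigned long align)
{
	uintptr_t end_of_avail = split_avail_off(num)
				 + (2 + num) * sizeof(uint16_t);
	return ALIGN_CEIL(end_of_avail, align);
}
```

Storing desc_iova in the vring lets the next patch compute these offsets from the IOVA base instead of from the virtual desc pointer.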
* [PATCH v3 3/3] net/virtio_user: support sharing vq descriptor IOVA to the backend
2024-07-03 10:03 ` [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
2024-07-03 10:03 ` [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address Srujana Challa
2024-07-03 10:03 ` [PATCH v3 2/3] net/virtio: store desc IOVA address in vring data structure Srujana Challa
@ 2024-07-03 10:03 ` Srujana Challa
2024-07-03 13:41 ` Maxime Coquelin
2024-07-03 14:34 ` [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend Maxime Coquelin
3 siblings, 1 reply; 22+ messages in thread
From: Srujana Challa @ 2024-07-03 10:03 UTC (permalink / raw)
To: dev, maxime.coquelin, chenbox; +Cc: jerinj, ndabilpuram, vattunuru, schalla
Adds support for sharing the descriptor IOVA with the vhost backend.
This makes the virtio_user driver work in IOVA as PA mode when
the use_va flag is disabled.
This patch also disables the use_va flag for the vDPA backend.
Signed-off-by: Srujana Challa <schalla@marvell.com>
---
.../net/virtio/virtio_user/virtio_user_dev.c | 26 ++++++++++++-------
drivers/net/virtio/virtio_user_ethdev.c | 10 ++++++-
2 files changed, 26 insertions(+), 10 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 7f35f4b06b..02324501fb 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -118,6 +118,7 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
struct vhost_vring_state state;
struct vring *vring = &dev->vrings.split[queue_sel];
struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
+ uint64_t desc_addr, avail_addr, used_addr;
struct vhost_vring_addr addr = {
.index = queue_sel,
.log_guest_addr = 0,
@@ -137,16 +138,23 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
}
if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
- addr.desc_user_addr =
- (uint64_t)(uintptr_t)pq_vring->desc;
- addr.avail_user_addr =
- (uint64_t)(uintptr_t)pq_vring->driver;
- addr.used_user_addr =
- (uint64_t)(uintptr_t)pq_vring->device;
+ desc_addr = pq_vring->desc_iova;
+ avail_addr = desc_addr + pq_vring->num * sizeof(struct vring_packed_desc);
+ used_addr = RTE_ALIGN_CEIL(avail_addr + sizeof(struct vring_packed_desc_event),
+ VIRTIO_VRING_ALIGN);
+
+ addr.desc_user_addr = desc_addr;
+ addr.avail_user_addr = avail_addr;
+ addr.used_user_addr = used_addr;
} else {
- addr.desc_user_addr = (uint64_t)(uintptr_t)vring->desc;
- addr.avail_user_addr = (uint64_t)(uintptr_t)vring->avail;
- addr.used_user_addr = (uint64_t)(uintptr_t)vring->used;
+ desc_addr = vring->desc_iova;
+ avail_addr = desc_addr + vring->num * sizeof(struct vring_desc);
+ used_addr = RTE_ALIGN_CEIL((uintptr_t)(&vring->avail->ring[vring->num]),
+ VIRTIO_VRING_ALIGN);
+
+ addr.desc_user_addr = desc_addr;
+ addr.avail_user_addr = avail_addr;
+ addr.used_user_addr = used_addr;
}
state.index = queue_sel;
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index bf9de36d8f..ae6593ba0b 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -198,6 +198,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
sizeof(struct vring_packed_desc_event),
VIRTIO_VRING_ALIGN);
vring->num = vq->vq_nentries;
+ vring->desc_iova = vq->vq_ring_mem;
vring->desc = (void *)(uintptr_t)desc_addr;
vring->driver = (void *)(uintptr_t)avail_addr;
vring->device = (void *)(uintptr_t)used_addr;
@@ -221,6 +222,7 @@ virtio_user_setup_queue_split(struct virtqueue *vq, struct virtio_user_dev *dev)
VIRTIO_VRING_ALIGN);
dev->vrings.split[queue_idx].num = vq->vq_nentries;
+ dev->vrings.split[queue_idx].desc_iova = vq->vq_ring_mem;
dev->vrings.split[queue_idx].desc = (void *)(uintptr_t)desc_addr;
dev->vrings.split[queue_idx].avail = (void *)(uintptr_t)avail_addr;
dev->vrings.split[queue_idx].used = (void *)(uintptr_t)used_addr;
@@ -689,7 +691,13 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
* Virtio-user requires using virtual addresses for the descriptors
* buffers, whatever other devices require
*/
- hw->use_va = true;
+ if (backend_type == VIRTIO_USER_BACKEND_VHOST_VDPA)
+ /* VDPA backend requires using iova for the buffers to make it
+ * work in IOVA as PA mode also.
+ */
+ hw->use_va = false;
+ else
+ hw->use_va = true;
/* previously called by pci probing for physical dev */
if (eth_virtio_dev_init(eth_dev) < 0) {
--
2.25.1
^ permalink raw reply [flat|nested] 22+ messages in thread
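The packed-ring address arithmetic added to virtio_user_kick_queue() above can be sketched on its own: the driver (avail) area follows the descriptor table directly, and the device (used) area is the driver event area rounded up to the ring alignment. Sizes below mirror the virtio 1.1 packed layout (16-byte descriptors, 4-byte event structure); the helper names are illustrative, not DPDK's.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed sizes, matching the virtio 1.1 packed-ring layout. */
#define PACKED_DESC_SZ  16u /* sizeof(struct vring_packed_desc) */
#define DESC_EVENT_SZ   4u  /* sizeof(struct vring_packed_desc_event) */

#define ALIGN_CEIL64(v, a) (((uint64_t)(v) + (a) - 1) & ~((uint64_t)(a) - 1))

/* Driver (avail) area follows the descriptor table directly. */
static uint64_t packed_avail_addr(uint64_t desc_iova, unsigned int num)
{
	return desc_iova + (uint64_t)num * PACKED_DESC_SZ;
}

/* Device (used) area: driver event area rounded up to the alignment. */
static uint64_t packed_used_addr(uint64_t desc_iova, unsigned int num,
				 unsigned long align)
{
	return ALIGN_CEIL64(packed_avail_addr(desc_iova, num) + DESC_EVENT_SZ,
			    align);
}
```

Because all three addresses derive from desc_iova, the backend receives consistent IO addresses whether EAL runs in VA or PA mode.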
* Re: [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address
2024-07-03 10:03 ` [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address Srujana Challa
@ 2024-07-03 10:19 ` Jerin Jacob
2024-07-03 13:28 ` Maxime Coquelin
2024-07-12 11:22 ` David Marchand
2024-07-03 13:40 ` Maxime Coquelin
2024-07-10 15:06 ` David Marchand
2 siblings, 2 replies; 22+ messages in thread
From: Jerin Jacob @ 2024-07-03 10:19 UTC (permalink / raw)
To: Srujana Challa
Cc: dev, maxime.coquelin, chenbox, jerinj, ndabilpuram, vattunuru
On Wed, Jul 3, 2024 at 3:43 PM Srujana Challa <schalla@marvell.com> wrote:
>
> This patch modifies the code to convert descriptor buffer IOVA
> addresses to virtual addresses during the processing of shadow
> control queue when IOVA mode is PA. This change enables Virtio-user
> to operate with IOVA as the descriptor buffer address.
>
> Signed-off-by: Srujana Challa <schalla@marvell.com>
> ---
> .../net/virtio/virtio_user/virtio_user_dev.c | 33 ++++++++++++-------
> 1 file changed, 21 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 1365c8a5c8..7f35f4b06b 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -896,6 +896,15 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
>
> #define CVQ_MAX_DATA_DESCS 32
>
> +static inline void *
> +virtio_user_iova2virt(rte_iova_t iova)
> +{
> + if (rte_eal_iova_mode() == RTE_IOVA_PA)
There is RTE_IOVA_DC as well, so we may use positive logic, i.e.
rte_eal_iova_mode() == RTE_IOVA_VA
> + return rte_mem_iova2virt(iova);
> + else
> + return (void *)(uintptr_t)iova;
> +}
> +
> static uint32_t
> virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vring,
> uint16_t idx_hdr)
> @@ -921,17 +930,18 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
> idx_status = i;
> n_descs++;
>
> - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
> + hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
> if (hdr->class == VIRTIO_NET_CTRL_MQ &&
> hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> - uint16_t queues;
> + uint16_t queues, *addr;
>
> - queues = *(uint16_t *)(uintptr_t)vring->desc[idx_data].addr;
> + addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
> + queues = *addr;
> status = virtio_user_handle_mq(dev, queues);
> } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> struct virtio_net_ctrl_rss *rss;
>
> - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
> + rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
> status = virtio_user_handle_mq(dev, rss->max_tx_vq);
> } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
> hdr->class == VIRTIO_NET_CTRL_MAC ||
> @@ -944,7 +954,7 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
> (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
>
> /* Update status */
> - *(virtio_net_ctrl_ack *)(uintptr_t)vring->desc[idx_status].addr = status;
> + *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
>
> return n_descs;
> }
> @@ -986,18 +996,18 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
> n_descs++;
> }
>
> - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
> + hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
> if (hdr->class == VIRTIO_NET_CTRL_MQ &&
> hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> - uint16_t queues;
> + uint16_t queues, *addr;
>
> - queues = *(uint16_t *)(uintptr_t)
> - vring->desc[idx_data].addr;
> + addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
> + queues = *addr;
> status = virtio_user_handle_mq(dev, queues);
> } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> struct virtio_net_ctrl_rss *rss;
>
> - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
> + rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
> status = virtio_user_handle_mq(dev, rss->max_tx_vq);
> } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
> hdr->class == VIRTIO_NET_CTRL_MAC ||
> @@ -1010,8 +1020,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
> (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
>
> /* Update status */
> - *(virtio_net_ctrl_ack *)(uintptr_t)
> - vring->desc[idx_status].addr = status;
> + *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
>
> /* Update used descriptor */
> vring->desc[idx_hdr].id = vring->desc[idx_status].id;
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 22+ messages in thread
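Jerin's point about RTE_IOVA_DC can be illustrated standalone: with three possible IOVA modes, testing for VA (the only mode where the IOVA already is a virtual address) and translating otherwise also covers the "don't care" mode. The enum and translation stub below are illustrative stand-ins, not DPDK's API.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for rte_iova_mode, not DPDK's enum. */
enum iova_mode { IOVA_DC = 0, IOVA_PA, IOVA_VA };

/* Stand-in for rte_mem_iova2virt(): pretend VA = IOVA + fixed offset,
 * just so the control flow can be exercised. */
static void *stub_iova2virt(uint64_t iova)
{
	return (void *)(uintptr_t)(iova + 0x1000);
}

static void *iova2virt(enum iova_mode mode, uint64_t iova)
{
	if (mode == IOVA_VA)                 /* positive logic: only VA mode */
		return (void *)(uintptr_t)iova; /* needs no translation */
	return stub_iova2virt(iova);         /* PA and DC both translate */
}
```

With the negated check (mode == PA), the DC case would silently skip translation, which is what the positive-logic form avoids.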
* Re: [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address
2024-07-03 10:19 ` Jerin Jacob
@ 2024-07-03 13:28 ` Maxime Coquelin
2024-07-12 11:22 ` David Marchand
1 sibling, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2024-07-03 13:28 UTC (permalink / raw)
To: Jerin Jacob, Srujana Challa; +Cc: dev, chenbox, jerinj, ndabilpuram, vattunuru
On 7/3/24 12:19, Jerin Jacob wrote:
> On Wed, Jul 3, 2024 at 3:43 PM Srujana Challa <schalla@marvell.com> wrote:
>>
>> This patch modifies the code to convert descriptor buffer IOVA
>> addresses to virtual addresses during the processing of shadow
>> control queue when IOVA mode is PA. This change enables Virtio-user
>> to operate with IOVA as the descriptor buffer address.
>>
>> Signed-off-by: Srujana Challa <schalla@marvell.com>
>> ---
>> .../net/virtio/virtio_user/virtio_user_dev.c | 33 ++++++++++++-------
>> 1 file changed, 21 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> index 1365c8a5c8..7f35f4b06b 100644
>> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> @@ -896,6 +896,15 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
>>
>> #define CVQ_MAX_DATA_DESCS 32
>>
>> +static inline void *
>> +virtio_user_iova2virt(rte_iova_t iova)
>> +{
>> + if (rte_eal_iova_mode() == RTE_IOVA_PA)
>
> There is RTE_IOVA_DC as well. So we may put positive logic. i.e
> rte_eal_iova_mode() == RTE_IOVA_VA
Indeed, that would be better.
I can fix it while applying.
Thanks,
Maxime
>
>> + return rte_mem_iova2virt(iova);
>> + else
>> + return (void *)(uintptr_t)iova;
>> +}
>> +
>> static uint32_t
>> virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vring,
>> uint16_t idx_hdr)
>> @@ -921,17 +930,18 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
>> idx_status = i;
>> n_descs++;
>>
>> - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
>> + hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
>> if (hdr->class == VIRTIO_NET_CTRL_MQ &&
>> hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
>> - uint16_t queues;
>> + uint16_t queues, *addr;
>>
>> - queues = *(uint16_t *)(uintptr_t)vring->desc[idx_data].addr;
>> + addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
>> + queues = *addr;
>> status = virtio_user_handle_mq(dev, queues);
>> } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
>> struct virtio_net_ctrl_rss *rss;
>>
>> - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
>> + rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
>> status = virtio_user_handle_mq(dev, rss->max_tx_vq);
>> } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
>> hdr->class == VIRTIO_NET_CTRL_MAC ||
>> @@ -944,7 +954,7 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
>> (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
>>
>> /* Update status */
>> - *(virtio_net_ctrl_ack *)(uintptr_t)vring->desc[idx_status].addr = status;
>> + *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
>>
>> return n_descs;
>> }
>> @@ -986,18 +996,18 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
>> n_descs++;
>> }
>>
>> - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
>> + hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
>> if (hdr->class == VIRTIO_NET_CTRL_MQ &&
>> hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
>> - uint16_t queues;
>> + uint16_t queues, *addr;
>>
>> - queues = *(uint16_t *)(uintptr_t)
>> - vring->desc[idx_data].addr;
>> + addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
>> + queues = *addr;
>> status = virtio_user_handle_mq(dev, queues);
>> } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
>> struct virtio_net_ctrl_rss *rss;
>>
>> - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
>> + rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
>> status = virtio_user_handle_mq(dev, rss->max_tx_vq);
>> } else if (hdr->class == VIRTIO_NET_CTRL_RX ||
>> hdr->class == VIRTIO_NET_CTRL_MAC ||
>> @@ -1010,8 +1020,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
>> (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
>>
>> /* Update status */
>> - *(virtio_net_ctrl_ack *)(uintptr_t)
>> - vring->desc[idx_status].addr = status;
>> + *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
>>
>> /* Update used descriptor */
>> vring->desc[idx_hdr].id = vring->desc[idx_status].id;
>> --
>> 2.25.1
>>
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address
2024-07-03 10:03 ` [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address Srujana Challa
2024-07-03 10:19 ` Jerin Jacob
@ 2024-07-03 13:40 ` Maxime Coquelin
2024-07-10 15:06 ` David Marchand
2 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2024-07-03 13:40 UTC (permalink / raw)
To: Srujana Challa, dev, chenbox; +Cc: jerinj, ndabilpuram, vattunuru
On 7/3/24 12:03, Srujana Challa wrote:
> This patch modifies the code to convert descriptor buffer IOVA
> addresses to virtual addresses during the processing of shadow
> control queue when IOVA mode is PA. This change enables Virtio-user
> to operate with IOVA as the descriptor buffer address.
>
> Signed-off-by: Srujana Challa <schalla@marvell.com>
> ---
> .../net/virtio/virtio_user/virtio_user_dev.c | 33 ++++++++++++-------
> 1 file changed, 21 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 1365c8a5c8..7f35f4b06b 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -896,6 +896,15 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
>
> #define CVQ_MAX_DATA_DESCS 32
>
> +static inline void *
> +virtio_user_iova2virt(rte_iova_t iova)
> +{
> + if (rte_eal_iova_mode() == RTE_IOVA_PA)
> + return rte_mem_iova2virt(iova);
> + else
> + return (void *)(uintptr_t)iova;
> +}
> +
Thanks for the last minute change.
With Jerin suggestion:
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Maxime
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v3 2/3] net/virtio: store desc IOVA address in vring data structure
2024-07-03 10:03 ` [PATCH v3 2/3] net/virtio: store desc IOVA address in vring data structure Srujana Challa
@ 2024-07-03 13:41 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2024-07-03 13:41 UTC (permalink / raw)
To: Srujana Challa, dev, chenbox; +Cc: jerinj, ndabilpuram, vattunuru
On 7/3/24 12:03, Srujana Challa wrote:
> Stores desc IOVA in the queue's vring data structure,
> as preliminary work to provide a way for Virtio-user
> to share desc IOVA to the vhost backend.
>
> Signed-off-by: Srujana Challa <schalla@marvell.com>
> ---
> drivers/net/virtio/virtio_ring.h | 12 ++++++++----
> drivers/net/virtio/virtqueue.c | 4 ++--
> 2 files changed, 10 insertions(+), 6 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v3 3/3] net/virtio_user: support sharing vq descriptor IOVA to the backend
2024-07-03 10:03 ` [PATCH v3 3/3] net/virtio_user: support sharing vq descriptor IOVA to the backend Srujana Challa
@ 2024-07-03 13:41 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2024-07-03 13:41 UTC (permalink / raw)
To: Srujana Challa, dev, chenbox; +Cc: jerinj, ndabilpuram, vattunuru
On 7/3/24 12:03, Srujana Challa wrote:
> Adds support for sharing the descriptor IOVA with the vhost backend.
> This makes the virtio_user driver work in IOVA as PA mode when
> the use_va flag is disabled.
> This patch also disables the use_va flag for the vDPA backend.
>
> Signed-off-by: Srujana Challa <schalla@marvell.com>
> ---
> .../net/virtio/virtio_user/virtio_user_dev.c | 26 ++++++++++++-------
> drivers/net/virtio/virtio_user_ethdev.c | 10 ++++++-
> 2 files changed, 26 insertions(+), 10 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend
2024-07-03 10:03 ` [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
` (2 preceding siblings ...)
2024-07-03 10:03 ` [PATCH v3 3/3] net/virtio_user: support sharing vq descriptor IOVA to the backend Srujana Challa
@ 2024-07-03 14:34 ` Maxime Coquelin
3 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2024-07-03 14:34 UTC (permalink / raw)
To: Srujana Challa, dev, chenbox; +Cc: jerinj, ndabilpuram, vattunuru
On 7/3/24 12:03, Srujana Challa wrote:
> This patch series makes Virtio-user work in IOVA as PA mode
> for the vDPA backend.
>
> The first patch fixes the issue of buffer IOVA addresses appearing in
> control queue descriptors.
> The second and third patches share the descriptor IOVA address
> with the vhost backend, and also disable the use_va flag for the
> vDPA backend type.
>
> v1->v2:
> - Split single patch into three patches.
> v2->v3:
> - Addressed the review comment by using rte_mem_iova2virt() for desc
> address translation.
>
> Srujana Challa (3):
> net/virtio_user: convert cq descriptor IOVA address to Virtual address
> net/virtio: store desc IOVA address in vring data structure
> net/virtio_user: support sharing vq descriptor IOVA to the backend
>
> drivers/net/virtio/virtio_ring.h | 12 ++--
> .../net/virtio/virtio_user/virtio_user_dev.c | 59 ++++++++++++-------
> drivers/net/virtio/virtio_user_ethdev.c | 10 +++-
> drivers/net/virtio/virtqueue.c | 4 +-
> 4 files changed, 57 insertions(+), 28 deletions(-)
>
Applied to next-virtio/for-next-net.
Thanks,
Maxime
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address
2024-07-03 10:03 ` [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address Srujana Challa
2024-07-03 10:19 ` Jerin Jacob
2024-07-03 13:40 ` Maxime Coquelin
@ 2024-07-10 15:06 ` David Marchand
2024-07-11 8:54 ` [EXTERNAL] " Srujana Challa
2 siblings, 1 reply; 22+ messages in thread
From: David Marchand @ 2024-07-10 15:06 UTC (permalink / raw)
To: Srujana Challa, Jerin Jacob Kollanukkaran
Cc: dev, maxime.coquelin, chenbox, ndabilpuram, vattunuru,
Thomas Monjalon, Ferruh Yigit
Hello Srujana, Jerin,
On Wed, Jul 3, 2024 at 12:04 PM Srujana Challa <schalla@marvell.com> wrote:
>
> This patch modifies the code to convert descriptor buffer IOVA
> addresses to virtual addresses during the processing of shadow
> control queue when IOVA mode is PA. This change enables Virtio-user
> to operate with IOVA as the descriptor buffer address.
>
> Signed-off-by: Srujana Challa <schalla@marvell.com>
This patch triggers a segfault in testpmd when using virtio-user
(server) + vhost-user (client).
This was caught in OVS unit tests running in GHA virtual machines.
In such an env with no iommu, IOVA is forced to PA.
This can be reproduced with two testpmd in an env without iommu like a
vm, or alternatively forcing --iova-mode=pa in the cmdline:
$ rm -f vhost-user; gdb ./build/app/dpdk-testpmd -ex 'run -l 0-2
--in-memory --socket-mem=512 --single-file-segments --no-pci
--file-prefix virtio
--vdev=net_virtio_user,path=vhost-user,queues=2,server=1 -- -i'
...
EAL: Selected IOVA mode 'PA'
...
vhost_user_start_server(): (vhost-user) waiting for client connection...
$ ./build/app/dpdk-testpmd -l 0,3-4 --in-memory --socket-mem=512
--single-file-segments --no-pci --file-prefix vhost-user --vdev
net_vhost,iface=vhost-user,client=1 -- -i
...
EAL: Selected IOVA mode 'PA'
...
VHOST_CONFIG: (vhost-user) virtio is now ready for processing.
On the virtio-user side:
Thread 1 "dpdk-testpmd" received signal SIGSEGV, Segmentation fault.
0x0000000002f956ab in virtio_user_handle_ctrl_msg_split
(dev=0x11a01a8d00, vring=0x11a01a8aa0, idx_hdr=0) at
../drivers/net/virtio/virtio_user/virtio_user_dev.c:942
942 if (hdr->class == VIRTIO_NET_CTRL_MQ &&
(gdb) bt
#0 0x0000000002f956ab in virtio_user_handle_ctrl_msg_split
(dev=0x11a01a8d00, vring=0x11a01a8aa0, idx_hdr=0) at
../drivers/net/virtio/virtio_user/virtio_user_dev.c:942
#1 0x0000000002f95d06 in virtio_user_handle_cq_split
(dev=0x11a01a8d00, queue_idx=4) at
../drivers/net/virtio/virtio_user/virtio_user_dev.c:1087
#2 0x0000000002f95dba in virtio_user_handle_cq (dev=0x11a01a8d00,
queue_idx=4) at
../drivers/net/virtio/virtio_user/virtio_user_dev.c:1104
#3 0x0000000002f79af7 in virtio_user_notify_queue (hw=0x11a01a8d00,
vq=0x11a0181e00) at ../drivers/net/virtio/virtio_user_ethdev.c:278
#4 0x0000000002f45408 in virtqueue_notify (vq=0x11a0181e00) at
../drivers/net/virtio/virtqueue.h:525
#5 0x0000000002f45bf0 in virtio_control_queue_notify
(vq=0x11a0181e00, cookie=0x0) at
../drivers/net/virtio/virtio_ethdev.c:227
#6 0x0000000002f404a5 in virtio_send_command_split (cvq=0x11a0181e60,
ctrl=0x7fffffffc850, dlen=0x7fffffffc84c, pkt_num=1) at
../drivers/net/virtio/virtio_cvq.c:158
#7 0x0000000002f407a7 in virtio_send_command (cvq=0x11a0181e60,
ctrl=0x7fffffffc850, dlen=0x7fffffffc84c, pkt_num=1) at
../drivers/net/virtio/virtio_cvq.c:224
#8 0x0000000002f45af7 in virtio_set_multiple_queues_auto
(dev=0x4624b80 <rte_eth_devices>, nb_queues=1) at
../drivers/net/virtio/virtio_ethdev.c:192
#9 0x0000000002f45b99 in virtio_set_multiple_queues (dev=0x4624b80
<rte_eth_devices>, nb_queues=1) at
../drivers/net/virtio/virtio_ethdev.c:210
#10 0x0000000002f4ad2d in virtio_dev_start (dev=0x4624b80
<rte_eth_devices>) at ../drivers/net/virtio/virtio_ethdev.c:2385
#11 0x0000000000aa4336 in rte_eth_dev_start (port_id=0) at
../lib/ethdev/rte_ethdev.c:1752
#12 0x00000000005984f7 in eth_dev_start_mp (port_id=0) at
../app/test-pmd/testpmd.c:642
#13 0x000000000059ddb7 in start_port (pid=65535) at
../app/test-pmd/testpmd.c:3269
#14 0x00000000005a0eea in main (argc=2, argv=0x7fffffffdfe0) at
../app/test-pmd/testpmd.c:4644
(gdb) l
937 /* locate desc for status */
938 idx_status = i;
939 n_descs++;
940
941 hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
942 if (hdr->class == VIRTIO_NET_CTRL_MQ &&
943 hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
944 uint16_t queues, *addr;
945
946 addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
(gdb) p hdr
$1 = (struct virtio_net_ctrl_hdr *) 0x0
We need someone from Marvell to fix this issue.
The next option is to revert the whole series (which was discussed and
agreed before Maxime went off, in the event this series would trigger
any issue).
--
David Marchand
^ permalink raw reply [flat|nested] 22+ messages in thread
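The backtrace shows hdr coming back NULL from the IOVA translation and being dereferenced. One defensive direction — a sketch only; the thread above does not show the eventual fix — is to validate the translated pointer before touching it, so an unmapped descriptor address fails the control message instead of segfaulting. The translation hook and the two-byte header layout below are stand-ins for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t rte_iova_t;

/* Hypothetical translation hook; returns NULL for unmapped IOVAs,
 * mirroring what rte_mem_iova2virt() does for addresses outside the
 * registered memsegs. Here only IOVA 0x2000 is "mapped". */
static void *fake_iova2virt(rte_iova_t iova)
{
	static uint8_t hdr_buf[2];

	return (iova == 0x2000) ? (void *)hdr_buf : NULL;
}

/* Minimal stand-in for the control header the driver reads. */
struct virtio_net_ctrl_hdr { uint8_t class; uint8_t cmd; };

/* Sketch: check the translated pointer before dereferencing, so an
 * unmapped descriptor address is rejected rather than crashing as in
 * the reported backtrace. */
static int handle_ctrl_hdr(rte_iova_t hdr_iova)
{
	struct virtio_net_ctrl_hdr *hdr = fake_iova2virt(hdr_iova);

	if (hdr == NULL)
		return -1; /* unmapped IOVA: drop the message */
	return hdr->class; /* safe to dereference */
}
```

This only papers over the symptom, of course; the underlying question in the thread is why the CVQ descriptor address fails to translate at all in this virtio-user + vhost-user setup.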
* RE: [EXTERNAL] Re: [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address
2024-07-10 15:06 ` David Marchand
@ 2024-07-11 8:54 ` Srujana Challa
0 siblings, 0 replies; 22+ messages in thread
From: Srujana Challa @ 2024-07-11 8:54 UTC (permalink / raw)
To: David Marchand, Jerin Jacob
Cc: dev, maxime.coquelin, chenbox, Nithin Kumar Dabilpuram,
Vamsi Krishna Attunuru, Thomas Monjalon, Ferruh Yigit
> Hello Srujana, Jerin,
>
> On Wed, Jul 3, 2024 at 12:04 PM Srujana Challa <schalla@marvell.com>
> wrote:
> >
> > This patch modifies the code to convert descriptor buffer IOVA
> > addresses to virtual addresses during the processing of shadow control
> > queue when IOVA mode is PA. This change enables Virtio-user to operate
> > with IOVA as the descriptor buffer address.
> >
> > Signed-off-by: Srujana Challa <schalla@marvell.com>
>
> This patch triggers a segfault in testpmd when using virtio-user
> (server) + vhost-user (client).
> This was caught in OVS unit tests running in GHA virtual machines.
> In such an env with no iommu, IOVA is forced to PA.
>
> This can be reproduced with two testpmd in an env without iommu like a vm,
> or alternatively forcing --iova-mode=pa in the cmdline:
>
> [reproduction steps and full backtrace snipped; quoted in full in the
> parent message above]
>
>
> We need someone from Marvell to fix this issue.
> The next option is to revert the whole series (which was discussed and agreed
> before Maxime went off, in the event this series would trigger any issue).
Sure, we are looking into the issue.
Thanks.
>
> --
> David Marchand
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address
2024-07-03 10:19 ` Jerin Jacob
2024-07-03 13:28 ` Maxime Coquelin
@ 2024-07-12 11:22 ` David Marchand
1 sibling, 0 replies; 22+ messages in thread
From: David Marchand @ 2024-07-12 11:22 UTC (permalink / raw)
To: Jerin Jacob, Srujana Challa
Cc: dev, maxime.coquelin, chenbox, jerinj, ndabilpuram, vattunuru
Hello,
On Wed, Jul 3, 2024 at 12:19 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Wed, Jul 3, 2024 at 3:43 PM Srujana Challa <schalla@marvell.com> wrote:
> >
> > This patch modifies the code to convert descriptor buffer IOVA
> > addresses to virtual addresses during the processing of shadow
> > control queue when IOVA mode is PA. This change enables Virtio-user
> > to operate with IOVA as the descriptor buffer address.
> >
> > Signed-off-by: Srujana Challa <schalla@marvell.com>
> > ---
> > .../net/virtio/virtio_user/virtio_user_dev.c | 33 ++++++++++++-------
> > 1 file changed, 21 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > index 1365c8a5c8..7f35f4b06b 100644
> > --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > @@ -896,6 +896,15 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
> >
> > #define CVQ_MAX_DATA_DESCS 32
> >
> > +static inline void *
> > +virtio_user_iova2virt(rte_iova_t iova)
> > +{
> > + if (rte_eal_iova_mode() == RTE_IOVA_PA)
>
> There is RTE_IOVA_DC as well, so we may use positive logic, i.e.
> rte_eal_iova_mode() == RTE_IOVA_VA
Buses can provide RTE_IOVA_VA, RTE_IOVA_PA or RTE_IOVA_DC hints to EAL.
https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html#iova-mode-detection
But at the point drivers are probed (and as a consequence when
handling descriptors here), the iova mode is fixed to be either
RTE_IOVA_PA or RTE_IOVA_VA.
https://git.dpdk.org/dpdk/tree/lib/eal/linux/eal.c#n1077
https://git.dpdk.org/dpdk/tree/lib/eal/linux/eal.c#n1124
https://git.dpdk.org/dpdk/tree/lib/eal/linux/eal.c#n1288
--
David Marchand
^ permalink raw reply [flat|nested] 22+ messages in thread
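The mode-fixing behavior David describes — buses may hint "don't care", but by driver probe time the mode is always PA or VA — can be modeled with a toy resolver. The physaddrs_ok flag and the DC default below are simplifications of the real EAL checks at the linked lines, invented here for illustration:

```c
#include <assert.h>

enum rte_iova_mode { RTE_IOVA_DC, RTE_IOVA_PA, RTE_IOVA_VA };

/* Simplified model of EAL init: buses report a hint (possibly
 * "don't care"), and initialization forces the final mode to PA or
 * VA -- RTE_IOVA_DC never survives to driver probe. physaddrs_ok
 * stands in for the kernel capability checks the real code performs
 * (e.g. whether physical addresses are readable). */
static enum rte_iova_mode
resolve_iova_mode(enum rte_iova_mode bus_hint, int physaddrs_ok)
{
	if (bus_hint == RTE_IOVA_DC)
		bus_hint = physaddrs_ok ? RTE_IOVA_PA : RTE_IOVA_VA;
	return bus_hint; /* always PA or VA from here on */
}
```

This is why, in the driver-side helper discussed earlier in the thread, testing for either RTE_IOVA_PA or RTE_IOVA_VA is sufficient: the third value is already resolved away before any descriptor is handled.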
end of thread, other threads:[~2024-07-12 11:23 UTC | newest]
Thread overview: 22+ messages
2024-02-29 13:29 [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
2024-02-29 13:29 ` [PATCH v2 1/3] net/virtio_user: avoid cq descriptor buffer address accessing Srujana Challa
2024-06-28 8:07 ` Maxime Coquelin
2024-07-02 11:09 ` [EXTERNAL] " Srujana Challa
2024-07-02 12:41 ` Maxime Coquelin
2024-07-03 10:03 ` [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
2024-07-03 10:03 ` [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address Srujana Challa
2024-07-03 10:19 ` Jerin Jacob
2024-07-03 13:28 ` Maxime Coquelin
2024-07-12 11:22 ` David Marchand
2024-07-03 13:40 ` Maxime Coquelin
2024-07-10 15:06 ` David Marchand
2024-07-11 8:54 ` [EXTERNAL] " Srujana Challa
2024-07-03 10:03 ` [PATCH v3 2/3] net/virtio: store desc IOVA address in vring data structure Srujana Challa
2024-07-03 13:41 ` Maxime Coquelin
2024-07-03 10:03 ` [PATCH v3 3/3] net/virtio_user: support sharing vq descriptor IOVA to the backend Srujana Challa
2024-07-03 13:41 ` Maxime Coquelin
2024-07-03 14:34 ` [PATCH v3 0/3] net/virtio: support IOVA as PA mode for vDPA backend Maxime Coquelin
2024-02-29 13:29 ` [PATCH v2 2/3] net/virtio: store desc IOVA address in vring data structure Srujana Challa
2024-02-29 13:29 ` [PATCH v2 3/3] net/virtio_user: support sharing vq descriptor IOVA to the backend Srujana Challa
2024-06-19 9:39 ` [PATCH v2 0/3] net/virtio: support IOVA as PA mode for vDPA backend Srujana Challa
2024-06-28 13:33 ` Maxime Coquelin