From: Srujana Challa <schalla@marvell.com>
To: <dev@dpdk.org>, <maxime.coquelin@redhat.com>, <chenbox@nvidia.com>
Cc: <david.marchand@redhat.com>, <jerinj@marvell.com>,
<ndabilpuram@marvell.com>, <vattunuru@marvell.com>,
<schalla@marvell.com>
Subject: [PATCH v3] net/virtio_user: fix cq descriptor conversion with non-vDPA backend
Date: Fri, 12 Jul 2024 18:06:12 +0530
Message-ID: <20240712123612.2389160-1-schalla@marvell.com>
In-Reply-To: <20240711124436.2383232-1-schalla@marvell.com>

This patch modifies the code to translate descriptor buffer IOVA
addresses to virtual addresses only when the use_va flag is false.
When use_va is set, as it is with non-vDPA backends such as vhost-user,
the descriptor addresses are already virtual addresses and must be used
as-is. This fixes a segmentation fault with the vhost-user backend.

Fixes: 67e9e504dae2 ("net/virtio_user: convert cq descriptor IOVA address to Virtual address")
Reported-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
---
v3:
- Addressed the review comments from David Marchand.
v2:
- Added Reported-by tag.
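
For context, below is a minimal standalone sketch of the translation
rule the fix applies. The helper name ctrl_desc_to_virt and its bool
parameter are illustrative only; the real change is the
virtio_user_iova2virt(dev, ...) update in the diff, which reads
dev->hw.use_va:

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_memory.h>

    /*
     * - In RTE_IOVA_VA mode, IOVA == VA, so the descriptor address can
     *   be used directly.
     * - When the backend places virtual addresses in the vring (use_va,
     *   as the non-vDPA/vhost-user backend does), the address is already
     *   a VA; passing it to rte_mem_iova2virt() finds no matching memseg,
     *   returns NULL and leads to the reported crash.
     * - Only a PA-mode vDPA backend needs a real IOVA-to-VA lookup.
     */
    static inline void *
    ctrl_desc_to_virt(bool use_va, rte_iova_t addr)
    {
            if (rte_eal_iova_mode() == RTE_IOVA_VA || use_va)
                    return (void *)(uintptr_t)addr;

            return rte_mem_iova2virt(addr);
    }
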
.../net/virtio/virtio_user/virtio_user_dev.c | 20 +++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index fed66d2ae9..48b872524a 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -905,9 +905,9 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
#define CVQ_MAX_DATA_DESCS 32
static inline void *
-virtio_user_iova2virt(rte_iova_t iova)
+virtio_user_iova2virt(struct virtio_user_dev *dev, rte_iova_t iova)
{
- if (rte_eal_iova_mode() == RTE_IOVA_VA)
+ if (rte_eal_iova_mode() == RTE_IOVA_VA || dev->hw.use_va)
return (void *)(uintptr_t)iova;
else
return rte_mem_iova2virt(iova);
@@ -938,18 +938,18 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
idx_status = i;
n_descs++;
- hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
+ hdr = virtio_user_iova2virt(dev, vring->desc[idx_hdr].addr);
if (hdr->class == VIRTIO_NET_CTRL_MQ &&
hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
uint16_t queues, *addr;
- addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+ addr = virtio_user_iova2virt(dev, vring->desc[idx_data].addr);
queues = *addr;
status = virtio_user_handle_mq(dev, queues);
} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
struct virtio_net_ctrl_rss *rss;
- rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
+ rss = virtio_user_iova2virt(dev, vring->desc[idx_data].addr);
status = virtio_user_handle_mq(dev, rss->max_tx_vq);
} else if (hdr->class == VIRTIO_NET_CTRL_RX ||
hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -962,7 +962,7 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
/* Update status */
- *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
+ *(virtio_net_ctrl_ack *)virtio_user_iova2virt(dev, vring->desc[idx_status].addr) = status;
return n_descs;
}
@@ -1004,18 +1004,18 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
n_descs++;
}
- hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
+ hdr = virtio_user_iova2virt(dev, vring->desc[idx_hdr].addr);
if (hdr->class == VIRTIO_NET_CTRL_MQ &&
hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
uint16_t queues, *addr;
- addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+ addr = virtio_user_iova2virt(dev, vring->desc[idx_data].addr);
queues = *addr;
status = virtio_user_handle_mq(dev, queues);
} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
struct virtio_net_ctrl_rss *rss;
- rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
+ rss = virtio_user_iova2virt(dev, vring->desc[idx_data].addr);
status = virtio_user_handle_mq(dev, rss->max_tx_vq);
} else if (hdr->class == VIRTIO_NET_CTRL_RX ||
hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -1028,7 +1028,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
/* Update status */
- *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
+ *(virtio_net_ctrl_ack *)virtio_user_iova2virt(dev, vring->desc[idx_status].addr) = status;
/* Update used descriptor */
vring->desc[idx_hdr].id = vring->desc[idx_status].id;
--
2.25.1
Thread overview (8 messages):
2024-07-11 12:44 [PATCH v2] net/virtio_user: fix issue with converting cq descriptor IOVA address to VA Srujana Challa
2024-07-11 15:02 ` David Marchand
2024-07-11 17:46 ` [EXTERNAL] " Srujana Challa
2024-07-12 7:57 ` David Marchand
2024-07-12 11:30 ` David Marchand
2024-07-12 11:51 ` Srujana Challa
2024-07-12 12:36 ` Srujana Challa [this message]
2024-07-12 14:24 ` [PATCH v3] net/virtio_user: fix cq descriptor conversion with non-vDPA backend David Marchand