DPDK patches and discussions
* [PATCH v2] net/virtio_user: fix issue with converting cq descriptor IOVA address to VA
@ 2024-07-11 12:44 Srujana Challa
  2024-07-11 15:02 ` David Marchand
  2024-07-12 12:36 ` [PATCH v3] net/virtio_user: fix cq descriptor conversion with non vDPA backend Srujana Challa
  0 siblings, 2 replies; 8+ messages in thread
From: Srujana Challa @ 2024-07-11 12:44 UTC (permalink / raw)
  To: dev, maxime.coquelin, chenbox
  Cc: david.marchand, jerinj, ndabilpuram, vattunuru, schalla

This patch modifies the code to convert descriptor buffer IOVA
addresses to virtual addresses only when the use_va flag is false.

This patch resolves a segmentation fault with the vhost-user backend
that occurs during the processing of the shadow control queue.

'Fixes: 67e9e504dae2 ("net/virtio_user: convert cq descriptor IOVA
address to Virtual address")'

Reported-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
---
v2:
- Added Reported-by tag.

 .../net/virtio/virtio_user/virtio_user_dev.c  | 28 +++++++++++--------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index fed66d2ae9..94e0ddcb94 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -905,12 +905,12 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
 #define CVQ_MAX_DATA_DESCS 32
 
 static inline void *
-virtio_user_iova2virt(rte_iova_t iova)
+virtio_user_iova2virt(rte_iova_t iova, bool use_va)
 {
-	if (rte_eal_iova_mode() == RTE_IOVA_VA)
-		return (void *)(uintptr_t)iova;
-	else
+	if (rte_eal_iova_mode() == RTE_IOVA_PA && !use_va)
 		return rte_mem_iova2virt(iova);
+	else
+		return (void *)(uintptr_t)iova;
 }
 
 static uint32_t
@@ -922,6 +922,7 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 	uint16_t i, idx_data, idx_status;
 	uint32_t n_descs = 0;
 	int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
+	bool use_va = dev->hw.use_va;
 
 	/* locate desc for header, data, and status */
 	idx_data = vring->desc[idx_hdr].next;
@@ -938,18 +939,18 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 	idx_status = i;
 	n_descs++;
 
-	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
+	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr, use_va);
 	if (hdr->class == VIRTIO_NET_CTRL_MQ &&
 	    hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
 		uint16_t queues, *addr;
 
-		addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		addr = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		queues = *addr;
 		status = virtio_user_handle_mq(dev, queues);
 	} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
 		struct virtio_net_ctrl_rss *rss;
 
-		rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		rss = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		status = virtio_user_handle_mq(dev, rss->max_tx_vq);
 	} else if (hdr->class == VIRTIO_NET_CTRL_RX  ||
 		   hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -962,7 +963,8 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 				(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
 
 	/* Update status */
-	*(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
+	*(virtio_net_ctrl_ack *)
+		virtio_user_iova2virt(vring->desc[idx_status].addr, use_va) = status;
 
 	return n_descs;
 }
@@ -987,6 +989,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 	/* initialize to one, header is first */
 	uint32_t n_descs = 1;
 	int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
+	bool use_va = dev->hw.use_va;
 
 	/* locate desc for header, data, and status */
 	idx_data = idx_hdr + 1;
@@ -1004,18 +1007,18 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 		n_descs++;
 	}
 
-	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
+	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr, use_va);
 	if (hdr->class == VIRTIO_NET_CTRL_MQ &&
 	    hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
 		uint16_t queues, *addr;
 
-		addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		addr = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		queues = *addr;
 		status = virtio_user_handle_mq(dev, queues);
 	} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
 		struct virtio_net_ctrl_rss *rss;
 
-		rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		rss = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		status = virtio_user_handle_mq(dev, rss->max_tx_vq);
 	} else if (hdr->class == VIRTIO_NET_CTRL_RX  ||
 		   hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -1028,7 +1031,8 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 				(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
 
 	/* Update status */
-	*(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
+	*(virtio_net_ctrl_ack *)
+		virtio_user_iova2virt(vring->desc[idx_status].addr, use_va) = status;
 
 	/* Update used descriptor */
 	vring->desc[idx_hdr].id = vring->desc[idx_status].id;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2] net/virtio_user: fix issue with converting cq descriptor IOVA address to VA
  2024-07-11 12:44 [PATCH v2] net/virtio_user: fix issue with converting cq descriptor IOVA address to VA Srujana Challa
@ 2024-07-11 15:02 ` David Marchand
  2024-07-11 17:46   ` [EXTERNAL] " Srujana Challa
  2024-07-12 12:36 ` [PATCH v3] net/virtio_user: fix cq descriptor conversion with non vDPA backend Srujana Challa
  1 sibling, 1 reply; 8+ messages in thread
From: David Marchand @ 2024-07-11 15:02 UTC (permalink / raw)
  To: Srujana Challa
  Cc: dev, maxime.coquelin, chenbox, jerinj, ndabilpuram, vattunuru

On Thu, Jul 11, 2024 at 2:44 PM Srujana Challa <schalla@marvell.com> wrote:
>
> This patch modifies the code to convert descriptor buffer IOVA
> addresses to virtual addresses only when use_va flag is false.
>
> This patch resolves a segmentation fault with the vhost-user backend
> that occurs during the processing of the shadow control queue.
>
> 'Fixes: 67e9e504dae2 ("net/virtio_user: convert cq descriptor IOVA
> address to Virtual address")'

No single quote around the Fixes: tag, and on a single line please.

As for the title, how about: "net/virtio_user: fix cq descriptor
conversion with non vDPA backend" ?

>
> Reported-by: David Marchand <david.marchand@redhat.com>
> Signed-off-by: Srujana Challa <schalla@marvell.com>
> ---
> v2:
> - Added Reported-by tag.
>
>  .../net/virtio/virtio_user/virtio_user_dev.c  | 28 +++++++++++--------
>  1 file changed, 16 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index fed66d2ae9..94e0ddcb94 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -905,12 +905,12 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
>  #define CVQ_MAX_DATA_DESCS 32
>
>  static inline void *
> -virtio_user_iova2virt(rte_iova_t iova)
> +virtio_user_iova2virt(rte_iova_t iova, bool use_va)

All the code in this file passes the virtio_user_dev object, please
keep this convention.

IOW:
-virtio_user_iova2virt(rte_iova_t iova)
+virtio_user_iova2virt(struct virtio_user_dev *dev, rte_iova_t iova)

>  {
> -       if (rte_eal_iova_mode() == RTE_IOVA_VA)
> -               return (void *)(uintptr_t)iova;
> -       else
> +       if (rte_eal_iova_mode() == RTE_IOVA_PA && !use_va)
>                 return rte_mem_iova2virt(iova);
> +       else
> +               return (void *)(uintptr_t)iova;

Why do we need to invert this test?

I would make the change as simple as:
-       if (rte_eal_iova_mode() == RTE_IOVA_VA)
+       if (rte_eal_iova_mode() == RTE_IOVA_VA || dev->hw.use_va)


>  }
>
>  static uint32_t
> @@ -922,6 +922,7 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
>         uint16_t i, idx_data, idx_status;
>         uint32_t n_descs = 0;
>         int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
> +       bool use_va = dev->hw.use_va;
>
>         /* locate desc for header, data, and status */
>         idx_data = vring->desc[idx_hdr].next;
> @@ -938,18 +939,18 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
>         idx_status = i;
>         n_descs++;
>
> -       hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
> +       hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr, use_va);
>         if (hdr->class == VIRTIO_NET_CTRL_MQ &&
>             hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
>                 uint16_t queues, *addr;
>
> -               addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
> +               addr = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
>                 queues = *addr;
>                 status = virtio_user_handle_mq(dev, queues);
>         } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
>                 struct virtio_net_ctrl_rss *rss;
>
> -               rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
> +               rss = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
>                 status = virtio_user_handle_mq(dev, rss->max_tx_vq);
>         } else if (hdr->class == VIRTIO_NET_CTRL_RX  ||
>                    hdr->class == VIRTIO_NET_CTRL_MAC ||
> @@ -962,7 +963,8 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
>                                 (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
>
>         /* Update status */
> -       *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
> +       *(virtio_net_ctrl_ack *)
> +               virtio_user_iova2virt(vring->desc[idx_status].addr, use_va) = status;

Afaics, no need for reindenting.

>
>         return n_descs;
>  }
> @@ -987,6 +989,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
>         /* initialize to one, header is first */
>         uint32_t n_descs = 1;
>         int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
> +       bool use_va = dev->hw.use_va;
>
>         /* locate desc for header, data, and status */
>         idx_data = idx_hdr + 1;
> @@ -1004,18 +1007,18 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
>                 n_descs++;
>         }
>
> -       hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
> +       hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr, use_va);
>         if (hdr->class == VIRTIO_NET_CTRL_MQ &&
>             hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
>                 uint16_t queues, *addr;
>
> -               addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
> +               addr = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
>                 queues = *addr;
>                 status = virtio_user_handle_mq(dev, queues);
>         } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
>                 struct virtio_net_ctrl_rss *rss;
>
> -               rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
> +               rss = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
>                 status = virtio_user_handle_mq(dev, rss->max_tx_vq);
>         } else if (hdr->class == VIRTIO_NET_CTRL_RX  ||
>                    hdr->class == VIRTIO_NET_CTRL_MAC ||
> @@ -1028,7 +1031,8 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
>                                 (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
>
>         /* Update status */
> -       *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
> +       *(virtio_net_ctrl_ack *)
> +               virtio_user_iova2virt(vring->desc[idx_status].addr, use_va) = status;

Idem, no reindenting.

>
>         /* Update used descriptor */
>         vring->desc[idx_hdr].id = vring->desc[idx_status].id;
> --
> 2.25.1
>


-- 
David Marchand



* RE: [EXTERNAL] Re: [PATCH v2] net/virtio_user: fix issue with converting cq descriptor IOVA address to VA
  2024-07-11 15:02 ` David Marchand
@ 2024-07-11 17:46   ` Srujana Challa
  2024-07-12  7:57     ` David Marchand
  2024-07-12 11:30     ` David Marchand
  0 siblings, 2 replies; 8+ messages in thread
From: Srujana Challa @ 2024-07-11 17:46 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, maxime.coquelin, chenbox, Jerin Jacob,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru

> On Thu, Jul 11, 2024 at 2:44 PM Srujana Challa <schalla@marvell.com> wrote:
> >
> > This patch modifies the code to convert descriptor buffer IOVA
> > addresses to virtual addresses only when use_va flag is false.
> >
> > This patch resolves a segmentation fault with the vhost-user backend
> > that occurs during the processing of the shadow control queue.
> >
> > 'Fixes: 67e9e504dae2 ("net/virtio_user: convert cq descriptor IOVA
> > address to Virtual address")'
> 
> No single quote around the Fixes: tag, and on a single line please.
Ack
> 
> As for the title, how about: "net/virtio_user: fix cq descriptor conversion with
> non vDPA backend" ?
Ack
> 
> >
> > Reported-by: David Marchand <david.marchand@redhat.com>
> > Signed-off-by: Srujana Challa <schalla@marvell.com>
> > ---
> > v2:
> > - Added Reported-by tag.
> >
> >  .../net/virtio/virtio_user/virtio_user_dev.c  | 28
> > +++++++++++--------
> >  1 file changed, 16 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > index fed66d2ae9..94e0ddcb94 100644
> > --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > @@ -905,12 +905,12 @@ virtio_user_handle_mq(struct virtio_user_dev
> > *dev, uint16_t q_pairs)  #define CVQ_MAX_DATA_DESCS 32
> >
> >  static inline void *
> > -virtio_user_iova2virt(rte_iova_t iova)
> > +virtio_user_iova2virt(rte_iova_t iova, bool use_va)
> 
> All the code in this file passes the virtio_user_dev object, please keep this
> convention.
Ack
> 
> IOW:
> -virtio_user_iova2virt(rte_iova_t iova)
> +virtio_user_iova2virt(struct virtio_user_dev *dev, rte_iova_t iova)
> 
> >  {
> > -       if (rte_eal_iova_mode() == RTE_IOVA_VA)
> > -               return (void *)(uintptr_t)iova;
> > -       else
> > +       if (rte_eal_iova_mode() == RTE_IOVA_PA && !use_va)
> >                 return rte_mem_iova2virt(iova);
> > +       else
> > +               return (void *)(uintptr_t)iova;
> 
> Why do we need to invert this test?
Made this change to ensure that rte_mem_iova2virt() is never called when
the IOVA mode happens to be RTE_IOVA_DC.
> 
> I would make the change as simple as:
> -       if (rte_eal_iova_mode() == RTE_IOVA_VA)
> +       if (rte_eal_iova_mode() == RTE_IOVA_VA || dev->hw.use_va)
> 
> 
> >  }
> >
> >  static uint32_t
> > @@ -922,6 +922,7 @@ virtio_user_handle_ctrl_msg_split(struct
> virtio_user_dev *dev, struct vring *vri
> >         uint16_t i, idx_data, idx_status;
> >         uint32_t n_descs = 0;
> >         int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
> > +       bool use_va = dev->hw.use_va;
> >
> >         /* locate desc for header, data, and status */
> >         idx_data = vring->desc[idx_hdr].next; @@ -938,18 +939,18 @@
> > virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring
> *vri
> >         idx_status = i;
> >         n_descs++;
> >
> > -       hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
> > +       hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr,
> > + use_va);
> >         if (hdr->class == VIRTIO_NET_CTRL_MQ &&
> >             hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> >                 uint16_t queues, *addr;
> >
> > -               addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
> > +               addr =
> > + virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
> >                 queues = *addr;
> >                 status = virtio_user_handle_mq(dev, queues);
> >         } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd ==
> VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> >                 struct virtio_net_ctrl_rss *rss;
> >
> > -               rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
> > +               rss =
> > + virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
> >                 status = virtio_user_handle_mq(dev, rss->max_tx_vq);
> >         } else if (hdr->class == VIRTIO_NET_CTRL_RX  ||
> >                    hdr->class == VIRTIO_NET_CTRL_MAC || @@ -962,7
> > +963,8 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev,
> struct vring *vri
> >                                 (struct virtio_pmd_ctrl *)hdr, dlen,
> > nb_dlen);
> >
> >         /* Update status */
> > -       *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring-
> >desc[idx_status].addr) = status;
> > +       *(virtio_net_ctrl_ack *)
> > +               virtio_user_iova2virt(vring->desc[idx_status].addr,
> > + use_va) = status;
> 
> Afaics, no need for reindenting.
It actually crossed the 100-character line limit.
> 
> >
> >         return n_descs;
> >  }
> > @@ -987,6 +989,7 @@ virtio_user_handle_ctrl_msg_packed(struct
> virtio_user_dev *dev,
> >         /* initialize to one, header is first */
> >         uint32_t n_descs = 1;
> >         int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
> > +       bool use_va = dev->hw.use_va;
> >
> >         /* locate desc for header, data, and status */
> >         idx_data = idx_hdr + 1;
> > @@ -1004,18 +1007,18 @@ virtio_user_handle_ctrl_msg_packed(struct
> virtio_user_dev *dev,
> >                 n_descs++;
> >         }
> >
> > -       hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
> > +       hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr,
> > + use_va);
> >         if (hdr->class == VIRTIO_NET_CTRL_MQ &&
> >             hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> >                 uint16_t queues, *addr;
> >
> > -               addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
> > +               addr =
> > + virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
> >                 queues = *addr;
> >                 status = virtio_user_handle_mq(dev, queues);
> >         } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd ==
> VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> >                 struct virtio_net_ctrl_rss *rss;
> >
> > -               rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
> > +               rss =
> > + virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
> >                 status = virtio_user_handle_mq(dev, rss->max_tx_vq);
> >         } else if (hdr->class == VIRTIO_NET_CTRL_RX  ||
> >                    hdr->class == VIRTIO_NET_CTRL_MAC || @@ -1028,7
> > +1031,8 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev
> *dev,
> >                                 (struct virtio_pmd_ctrl *)hdr, dlen,
> > nb_dlen);
> >
> >         /* Update status */
> > -       *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring-
> >desc[idx_status].addr) = status;
> > +       *(virtio_net_ctrl_ack *)
> > +               virtio_user_iova2virt(vring->desc[idx_status].addr,
> > + use_va) = status;
> 
> Idem, no reindenting.
> 
> >
> >         /* Update used descriptor */
> >         vring->desc[idx_hdr].id = vring->desc[idx_status].id;
> > --
> > 2.25.1
> >
> 
> 
> --
> David Marchand



* Re: [EXTERNAL] Re: [PATCH v2] net/virtio_user: fix issue with converting cq descriptor IOVA address to VA
  2024-07-11 17:46   ` [EXTERNAL] " Srujana Challa
@ 2024-07-12  7:57     ` David Marchand
  2024-07-12 11:30     ` David Marchand
  1 sibling, 0 replies; 8+ messages in thread
From: David Marchand @ 2024-07-12  7:57 UTC (permalink / raw)
  To: Srujana Challa
  Cc: dev, maxime.coquelin, chenbox, Jerin Jacob,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru

On Thu, Jul 11, 2024 at 7:46 PM Srujana Challa <schalla@marvell.com> wrote:
> > > +963,8 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev,
> > struct vring *vri
> > >                                 (struct virtio_pmd_ctrl *)hdr, dlen,
> > > nb_dlen);
> > >
> > >         /* Update status */
> > > -       *(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring-
> > >desc[idx_status].addr) = status;
> > > +       *(virtio_net_ctrl_ack *)
> > > +               virtio_user_iova2virt(vring->desc[idx_status].addr,
> > > + use_va) = status;
> >
> > Afaics, no need for reindenting.
> It crossed 100 line boundary actually.

Ah indeed, the limit is crossed with "use_va".
But it won't be the case when passing dev instead.
So please keep the original indentation after applying my suggestion.

-- 
David Marchand



* Re: [EXTERNAL] Re: [PATCH v2] net/virtio_user: fix issue with converting cq descriptor IOVA address to VA
  2024-07-11 17:46   ` [EXTERNAL] " Srujana Challa
  2024-07-12  7:57     ` David Marchand
@ 2024-07-12 11:30     ` David Marchand
  2024-07-12 11:51       ` Srujana Challa
  1 sibling, 1 reply; 8+ messages in thread
From: David Marchand @ 2024-07-12 11:30 UTC (permalink / raw)
  To: Srujana Challa
  Cc: dev, maxime.coquelin, chenbox, Jerin Jacob,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru

On Thu, Jul 11, 2024 at 7:46 PM Srujana Challa <schalla@marvell.com> wrote:
> > -virtio_user_iova2virt(rte_iova_t iova)
> > +virtio_user_iova2virt(struct virtio_user_dev *dev, rte_iova_t iova)
> >
> > >  {
> > > -       if (rte_eal_iova_mode() == RTE_IOVA_VA)
> > > -               return (void *)(uintptr_t)iova;
> > > -       else
> > > +       if (rte_eal_iova_mode() == RTE_IOVA_PA && !use_va)
> > >                 return rte_mem_iova2virt(iova);
> > > +       else
> > > +               return (void *)(uintptr_t)iova;
> >
> > Why do we need to invert this test?
> Made this change to ensure that rte_mem_iova2virt() is not called when the IOVA mode is RTE_IOVA_DC
> by any chance.

Just repeating what I replied in the other thread, as I see this was
suggested by Jerin: it is not possible for the IOVA mode to be
RTE_IOVA_DC.


-- 
David Marchand



* RE: [EXTERNAL] Re: [PATCH v2] net/virtio_user: fix issue with converting cq descriptor IOVA address to VA
  2024-07-12 11:30     ` David Marchand
@ 2024-07-12 11:51       ` Srujana Challa
  0 siblings, 0 replies; 8+ messages in thread
From: Srujana Challa @ 2024-07-12 11:51 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, maxime.coquelin, chenbox, Jerin Jacob,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru

> On Thu, Jul 11, 2024 at 7:46 PM Srujana Challa <schalla@marvell.com> wrote:
> > > -virtio_user_iova2virt(rte_iova_t iova)
> > > +virtio_user_iova2virt(struct virtio_user_dev *dev, rte_iova_t iova)
> > >
> > > >  {
> > > > -       if (rte_eal_iova_mode() == RTE_IOVA_VA)
> > > > -               return (void *)(uintptr_t)iova;
> > > > -       else
> > > > +       if (rte_eal_iova_mode() == RTE_IOVA_PA && !use_va)
> > > >                 return rte_mem_iova2virt(iova);
> > > > +       else
> > > > +               return (void *)(uintptr_t)iova;
> > >
> > > Why do we need to invert this test?
> > Made this change to ensure that rte_mem_iova2virt() is not called when
> > the IOVA mode is RTE_IOVA_DC by any chance.
> 
> Just repeating what I replied in the other thread as I see it was suggested by
> Jerin.
> It is not possible iova mode == RTE_IOVA_DC.
Thank you for the clarification. I’ll incorporate the suggested changes in the next version.
> 
> 
> --
> David Marchand



* [PATCH v3] net/virtio_user: fix cq descriptor conversion with non vDPA backend
  2024-07-11 12:44 [PATCH v2] net/virtio_user: fix issue with converting cq descriptor IOVA address to VA Srujana Challa
  2024-07-11 15:02 ` David Marchand
@ 2024-07-12 12:36 ` Srujana Challa
  2024-07-12 14:24   ` David Marchand
  1 sibling, 1 reply; 8+ messages in thread
From: Srujana Challa @ 2024-07-12 12:36 UTC (permalink / raw)
  To: dev, maxime.coquelin, chenbox
  Cc: david.marchand, jerinj, ndabilpuram, vattunuru, schalla

This patch modifies the code to convert descriptor buffer IOVA
addresses to virtual addresses only when the use_va flag is false.

This patch fixes a segmentation fault with the vhost-user backend.

Fixes: 67e9e504dae2 ("net/virtio_user: convert cq descriptor IOVA address to Virtual address")

Reported-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
---
v3:
- Addressed the review comments from David Marchand.
v2:
- Added Reported-by tag.

 .../net/virtio/virtio_user/virtio_user_dev.c  | 20 +++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index fed66d2ae9..48b872524a 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -905,9 +905,9 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
 #define CVQ_MAX_DATA_DESCS 32
 
 static inline void *
-virtio_user_iova2virt(rte_iova_t iova)
+virtio_user_iova2virt(struct virtio_user_dev *dev, rte_iova_t iova)
 {
-	if (rte_eal_iova_mode() == RTE_IOVA_VA)
+	if (rte_eal_iova_mode() == RTE_IOVA_VA || dev->hw.use_va)
 		return (void *)(uintptr_t)iova;
 	else
 		return rte_mem_iova2virt(iova);
@@ -938,18 +938,18 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 	idx_status = i;
 	n_descs++;
 
-	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
+	hdr = virtio_user_iova2virt(dev, vring->desc[idx_hdr].addr);
 	if (hdr->class == VIRTIO_NET_CTRL_MQ &&
 	    hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
 		uint16_t queues, *addr;
 
-		addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		addr = virtio_user_iova2virt(dev, vring->desc[idx_data].addr);
 		queues = *addr;
 		status = virtio_user_handle_mq(dev, queues);
 	} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
 		struct virtio_net_ctrl_rss *rss;
 
-		rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		rss = virtio_user_iova2virt(dev, vring->desc[idx_data].addr);
 		status = virtio_user_handle_mq(dev, rss->max_tx_vq);
 	} else if (hdr->class == VIRTIO_NET_CTRL_RX  ||
 		   hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -962,7 +962,7 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 				(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
 
 	/* Update status */
-	*(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
+	*(virtio_net_ctrl_ack *)virtio_user_iova2virt(dev, vring->desc[idx_status].addr) = status;
 
 	return n_descs;
 }
@@ -1004,18 +1004,18 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 		n_descs++;
 	}
 
-	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
+	hdr = virtio_user_iova2virt(dev, vring->desc[idx_hdr].addr);
 	if (hdr->class == VIRTIO_NET_CTRL_MQ &&
 	    hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
 		uint16_t queues, *addr;
 
-		addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		addr = virtio_user_iova2virt(dev, vring->desc[idx_data].addr);
 		queues = *addr;
 		status = virtio_user_handle_mq(dev, queues);
 	} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
 		struct virtio_net_ctrl_rss *rss;
 
-		rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		rss = virtio_user_iova2virt(dev, vring->desc[idx_data].addr);
 		status = virtio_user_handle_mq(dev, rss->max_tx_vq);
 	} else if (hdr->class == VIRTIO_NET_CTRL_RX  ||
 		   hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -1028,7 +1028,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 				(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
 
 	/* Update status */
-	*(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
+	*(virtio_net_ctrl_ack *)virtio_user_iova2virt(dev, vring->desc[idx_status].addr) = status;
 
 	/* Update used descriptor */
 	vring->desc[idx_hdr].id = vring->desc[idx_status].id;
-- 
2.25.1



* Re: [PATCH v3] net/virtio_user: fix cq descriptor conversion with non vDPA backend
  2024-07-12 12:36 ` [PATCH v3] net/virtio_user: fix cq descriptor conversion with non vDPA backend Srujana Challa
@ 2024-07-12 14:24   ` David Marchand
  0 siblings, 0 replies; 8+ messages in thread
From: David Marchand @ 2024-07-12 14:24 UTC (permalink / raw)
  To: Srujana Challa
  Cc: dev, maxime.coquelin, chenbox, jerinj, ndabilpuram, vattunuru

On Fri, Jul 12, 2024 at 2:36 PM Srujana Challa <schalla@marvell.com> wrote:
>
> This patch modifies the code to convert descriptor buffer IOVA
> addresses to virtual addresses only when use_va flag is false.
>
> This patch fixes segmentation fault with vhost-user backend.
>
> Fixes: 67e9e504dae2 ("net/virtio_user: convert cq descriptor IOVA address to Virtual address")

This is not the sha1 of the commit in the main repo; I updated it.

>
> Reported-by: David Marchand <david.marchand@redhat.com>
> Signed-off-by: Srujana Challa <schalla@marvell.com>

The reported issue is fixed in OVS unit tests.
Applied, thanks.


-- 
David Marchand


