DPDK patches and discussions
* [dpdk-dev] [PATCH 1/2] vdpa/mlx5: fix guest notification timing
@ 2020-02-24 16:55 Matan Azrad
  2020-02-24 16:55 ` [dpdk-dev] [PATCH 2/2] doc: update mlx5 vDPA dependencies Matan Azrad
  2020-02-25  9:47 ` [dpdk-dev] [PATCH 1/2] vdpa/mlx5: fix guest notification timing Thomas Monjalon
  0 siblings, 2 replies; 3+ messages in thread
From: Matan Azrad @ 2020-02-24 16:55 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Thomas Monjalon, Maxime Coquelin

When the HW finishes consuming the guest Rx descriptors, it creates a
CQE in the CQ.

The mlx5 driver arms the CQ to get a notification when a specific CQE
index is created - the armed index is the next CQE index that the
driver should poll.

The mlx5 driver configured the kernel driver to send a notification to
the guest callfd at the same time the notification arrives at the mlx5
driver.

This means the guest was notified only for the first CQE in each poll
cycle, so if the driver polled the CQEs of all the available virtio
queue descriptors, the guest was not notified for the rest because no
new poll cycle was started.

Hence, the Rx queues might get stuck when the guest does not work in
poll mode.

Move the guest notification to after the driver has consumed all the
SW-owned CQEs. This way, the guest is notified only once all the
SW-owned CQEs have been polled.

Also initialize the CQ to be HW-owned at the start.
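
For illustration only (not part of the patch): a minimal, standalone
sketch of the fixed ordering - drain all completions first, then signal
the guest eventfd once. poll_all_completions() is a made-up stand-in
for mlx5_vdpa_cq_poll(), and callfd here is a locally created eventfd
rather than the guest's fd.

/* Simplified illustration: notify the guest only after the whole poll
 * cycle is done, mirroring the ordering introduced by this patch.
 */
#include <stdio.h>
#include <sys/eventfd.h>

/* Stand-in for mlx5_vdpa_cq_poll(): pretend 8 CQEs were consumed. */
static unsigned int
poll_all_completions(void)
{
	return 8;
}

int
main(void)
{
	/* In the driver the callfd comes from the guest; here it is local. */
	int callfd = eventfd(0, 0);
	unsigned int polled;

	if (callfd < 0)
		return 1;
	polled = poll_all_completions();
	/* Re-arming for the next CQE index (mlx5_vdpa_cq_arm()) goes here. */
	if (callfd != -1)
		/* One guest notification per poll cycle. */
		eventfd_write(callfd, (eventfd_t)1);
	printf("polled %u CQEs, notified the guest once\n", polled);
	return 0;
}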

Fixes: 8395927cdfaf ("vdpa/mlx5: prepare HW queues")

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa.h       |  1 +
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 18 +++++++-----------
 2 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index faeb54a..3324c9d 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -39,6 +39,7 @@ struct mlx5_vdpa_cq {
 	uint16_t log_desc_n;
 	uint32_t cq_ci:24;
 	uint32_t arm_sn:2;
+	int callfd;
 	rte_spinlock_t sl;
 	struct mlx5_devx_obj *cq;
 	struct mlx5dv_devx_umem *umem_obj;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 17fd9dd..16276f5 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -4,6 +4,7 @@
 #include <unistd.h>
 #include <stdint.h>
 #include <fcntl.h>
+#include <sys/eventfd.h>
 
 #include <rte_malloc.h>
 #include <rte_errno.h>
@@ -156,17 +157,9 @@
 		rte_errno = errno;
 		goto error;
 	}
-	/* Subscribe CQ event to the guest FD only if it is not in poll mode. */
-	if (callfd != -1) {
-		ret = mlx5_glue->devx_subscribe_devx_event_fd(priv->eventc,
-							      callfd,
-							      cq->cq->obj, 0);
-		if (ret) {
-			DRV_LOG(ERR, "Failed to subscribe CQE event fd.");
-			rte_errno = errno;
-			goto error;
-		}
-	}
+	cq->callfd = callfd;
+	/* Init CQ to all ones so it is HW-owned at the start. */
+	memset((void *)(uintptr_t)cq->umem_buf, 0xFF, attr.db_umem_offset);
 	/* First arming. */
 	mlx5_vdpa_cq_arm(priv, cq);
 	return 0;
@@ -231,6 +224,9 @@
 		rte_spinlock_lock(&cq->sl);
 		mlx5_vdpa_cq_poll(priv, cq);
 		mlx5_vdpa_cq_arm(priv, cq);
+		if (cq->callfd != -1)
+			/* Notify the guest that descriptors were consumed. */
+			eventfd_write(cq->callfd, (eventfd_t)1);
 		rte_spinlock_unlock(&cq->sl);
 		DRV_LOG(DEBUG, "CQ %d event: new cq_ci = %u.", cq->cq->id,
 			cq->cq_ci);
-- 
1.8.3.1



* [dpdk-dev] [PATCH 2/2] doc: update mlx5 vDPA dependencies
  2020-02-24 16:55 [dpdk-dev] [PATCH 1/2] vdpa/mlx5: fix guest notification timing Matan Azrad
@ 2020-02-24 16:55 ` Matan Azrad
  2020-02-25  9:47 ` [dpdk-dev] [PATCH 1/2] vdpa/mlx5: fix guest notification timing Thomas Monjalon
  1 sibling, 0 replies; 3+ messages in thread
From: Matan Azrad @ 2020-02-24 16:55 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Thomas Monjalon, Maxime Coquelin

The first Mellanox OFED version to support the mlx5 vDPA driver is 5.0.

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/vdpadevs/mlx5.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/guides/vdpadevs/mlx5.rst b/doc/guides/vdpadevs/mlx5.rst
index ce7c8a7..1660192 100644
--- a/doc/guides/vdpadevs/mlx5.rst
+++ b/doc/guides/vdpadevs/mlx5.rst
@@ -56,7 +56,7 @@ Supported NICs
 Prerequisites
 -------------
 
-- Mellanox OFED version: **4.7**
+- Mellanox OFED version: **5.0**
   see :doc:`../../nics/mlx5` guide for more Mellanox OFED details.
 
 Compilation options
-- 
1.8.3.1



* Re: [dpdk-dev] [PATCH 1/2] vdpa/mlx5: fix guest notification timing
  2020-02-24 16:55 [dpdk-dev] [PATCH 1/2] vdpa/mlx5: fix guest notification timing Matan Azrad
  2020-02-24 16:55 ` [dpdk-dev] [PATCH 2/2] doc: update mlx5 vDPA dependencies Matan Azrad
@ 2020-02-25  9:47 ` Thomas Monjalon
  1 sibling, 0 replies; 3+ messages in thread
From: Thomas Monjalon @ 2020-02-25  9:47 UTC (permalink / raw)
  To: Matan Azrad; +Cc: dev, Viacheslav Ovsiienko, Maxime Coquelin

24/02/2020 17:55, Matan Azrad:
> When the HW finishes consuming the guest Rx descriptors, it creates a
> CQE in the CQ.
> 
> The mlx5 driver arms the CQ to get a notification when a specific CQE
> index is created - the armed index is the next CQE index that the
> driver should poll.
> 
> The mlx5 driver configured the kernel driver to send a notification to
> the guest callfd at the same time the notification arrives at the mlx5
> driver.
> 
> This means the guest was notified only for the first CQE in each poll
> cycle, so if the driver polled the CQEs of all the available virtio
> queue descriptors, the guest was not notified for the rest because no
> new poll cycle was started.
> 
> Hence, the Rx queues might get stuck when the guest does not work in
> poll mode.
> 
> Move the guest notification to after the driver has consumed all the
> SW-owned CQEs. This way, the guest is notified only once all the
> SW-owned CQEs have been polled.
> 
> Also initialize the CQ to be HW-owned at the start.
> 
> Fixes: 8395927cdfaf ("vdpa/mlx5: prepare HW queues")
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>

Applied, thanks
Note: there is no regression risk because it is fixing a new driver.




