DPDK patches and discussions
* [PATCH 0/2] support to clear in-flight packets for async
@ 2022-04-13 18:27 Yuan Wang
  2022-04-13 18:27 ` [PATCH 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Yuan Wang @ 2022-04-13 18:27 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xuan.ding, xingguang.he, yvonnex.yang,
	sunil.pai.g, yuanx.wang

These patches add support for clearing in-flight packets on the async
dequeue path and introduce a thread-safe version of the clear API.
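
A minimal usage sketch (an editor's illustration, not part of the series):
draining a queue with the new thread-safe API before tearing a device down.
The buffer size and vchan id 0 are assumptions; rte_vhost_async_get_inflight(),
RTE_DIM() and rte_pktmbuf_free_bulk() are existing DPDK APIs.

    int vid;                 /* vhost device id (assumed already known) */
    uint16_t queue_id;       /* virtqueue to drain */
    int16_t dma_id;          /* DMA device bound to this queue */
    struct rte_mbuf *pkts[256];
    uint16_t n;

    /* keep polling until the async channel reports no in-flight packets */
    while (rte_vhost_async_get_inflight(vid, queue_id) > 0) {
        n = rte_vhost_clear_queue(vid, queue_id, pkts, RTE_DIM(pkts),
                dma_id, 0);
        rte_pktmbuf_free_bulk(pkts, n);
    }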

Note: these patches depend on the following patches
(https://patches.dpdk.org/project/dpdk/patch/20220411100032.114434-5-xuan.ding@intel.com/)
(https://patches.dpdk.org/project/dpdk/patch/20220411100032.114434-6-xuan.ding@intel.com/)

Yuan Wang (2):
  vhost: support clear in-flight packets for async dequeue
  example/vhost: support to clear in-flight packets for async dequeue

 doc/guides/prog_guide/vhost_lib.rst    |  8 ++-
 doc/guides/rel_notes/release_22_07.rst |  4 ++
 examples/vhost/main.c                  |  3 -
 lib/vhost/rte_vhost_async.h            | 25 ++++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/virtio_net.c                 | 80 +++++++++++++++++++++++++-
 6 files changed, 115 insertions(+), 6 deletions(-)

-- 
2.25.1



* [PATCH 1/2] vhost: support clear in-flight packets for async dequeue
  2022-04-13 18:27 [PATCH 0/2] support to clear in-flight packets for async Yuan Wang
@ 2022-04-13 18:27 ` Yuan Wang
  2022-04-13 18:27 ` [PATCH 2/2] example/vhost: support to " Yuan Wang
  2022-05-13 16:35 ` [PATCH v2 0/2] support to clear in-flight packets for async Yuan Wang
  2 siblings, 0 replies; 6+ messages in thread
From: Yuan Wang @ 2022-04-13 18:27 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xuan.ding, xingguang.he, yvonnex.yang,
	sunil.pai.g, yuanx.wang

rte_vhost_clear_queue_thread_unsafe() only supports clearing
in-flight packets for async enqueue. Now that async dequeue is
supported, this API should cover async dequeue as well.

This patch also adds a thread-safe version of this API; the
difference is that the thread-safe version takes the virtqueue
access lock.

These APIs may be used to clean up packets in the async channel
to prevent packet loss when the device state changes or
when the device is destroyed.
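
For context on the queue_id parity check in the diff below: vhost keeps the
virtio queue ordering in which even queue indices are RX queues (the enqueue
direction) and odd indices are TX queues (the dequeue direction), so the clear
APIs route even queues to the enqueue completion path and odd queues to the
async dequeue completion path. A small illustrative helper (the name is
hypothetical, not part of the patch):

    /* Even index -> RX queue, cleared via vhost_poll_enqueue_completed();
     * odd index  -> TX queue, cleared via async_poll_dequeue_completed_split(). */
    static inline int
    vq_is_enqueue_dir(uint16_t queue_id)
    {
        return (queue_id % 2) == 0;
    }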

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  8 ++-
 doc/guides/rel_notes/release_22_07.rst |  4 ++
 lib/vhost/rte_vhost_async.h            | 25 ++++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/virtio_net.c                 | 80 +++++++++++++++++++++++++-
 5 files changed, 115 insertions(+), 3 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 40cf315170..967c902703 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -273,7 +273,13 @@ The following is an overview of some key Vhost API functions:
 
 * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id, vchan_id)``
 
-  Clear inflight packets which are submitted to DMA engine in vhost async data
+  Clear in-flight packets which are submitted to the async channel in vhost
+  async data path without performing any locking. Completed packets are
+  returned to applications through ``pkts``.
+
+* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, vchan_id)``
+
+  Clear in-flight packets which are submitted to async channel in vhost async data
   path. Completed packets are returned to applications through ``pkts``.
 
 * ``rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 422a6673cb..6340ab9474 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -60,6 +60,10 @@ New Features
   Added vhost async dequeue API which can leverage DMA devices to accelerate
   receiving pkts from guest.
 
+* **Added thread-safe version of inflight packet clear API in vhost library.**
+  Added an API which can clear the inflight packets submitted to
+  the async channel in a thread-safe manner in the vhost async data path.
+
 Removed Items
 -------------
 
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 23fe1a7316..8a0e4849b9 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -166,6 +166,31 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
 		uint16_t vchan_id);
 
+/**
+ * This function checks async completion status and clears packets for
+ * a specific vhost device queue. Packets which are inflight will be
+ * returned in an array.
+ *
+ * @param vid
+ *  ID of vhost device to clear data
+ * @param queue_id
+ *  Queue id to clear data
+ * @param pkts
+ *  Blank array to get return packet pointer
+ * @param count
+ *  Size of the packet array
+ * @param dma_id
+ *  The identifier of the DMA device
+ * @param vchan_id
+ *  The identifier of virtual DMA channel
+ * @return
+ *  Number of packets returned
+ */
+__rte_experimental
+uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
+		uint16_t vchan_id);
+
 /**
  * The DMA vChannels used in asynchronous data path must be configured
  * first. So this function needs to be called before enabling DMA
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 514e3ff6a6..531c966c03 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -90,6 +90,7 @@ EXPERIMENTAL {
 
 	# added in 22.07
 	rte_vhost_async_try_dequeue_burst;
+	rte_vhost_clear_queue;
 };
 
 INTERNAL {
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 514315ef50..d650b291db 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -26,6 +26,11 @@
 
 #define MAX_BATCH_LEN 256
 
+static __rte_always_inline uint16_t
+async_poll_dequeue_completed_split(struct virtio_net *dev, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id,
+		uint16_t vchan_id, bool legacy_ol_flags);
+
 /* DMA device copy operation tracking array. */
 struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
 
@@ -2097,7 +2102,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 
 	VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+	if (unlikely(queue_id >= dev->nr_vring)) {
 		VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n",
 			dev->ifname, __func__, queue_id);
 		return 0;
@@ -2118,11 +2123,82 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 	}
 
-	n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id);
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id,
+					pkts, count, dma_id, vchan_id);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+					"(%d) %s: async dequeue does not support packed ring.\n",
+					dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, queue_id, pkts, count,
+					dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
 
 	return n_pkts_cpl;
 }
 
+uint16_t
+rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts,
+		uint16_t count, int16_t dma_id, uint16_t vchan_id)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+	uint16_t n_pkts_cpl = 0;
+
+	if (!dev)
+		return 0;
+
+	VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__);
+	if (unlikely(queue_id >= dev->nr_vring)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n",
+			dev->ifname, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	if (!rte_spinlock_trylock(&vq->access_lock)) {
+		VHOST_LOG_DATA(ERR,
+			"(%d) %s: failed to clear async queue id %d, virtqueue busy.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	if (unlikely(!vq->async)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for queue id %d.\n",
+			dev->ifname, __func__, queue_id);
+		goto out_access_unlock;
+	}
+
+	if (unlikely(!dma_copy_track[dma_id].vchans ||
+				!dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", dev->ifname, __func__,
+				dma_id, vchan_id);
+		goto out_access_unlock;
+	}
+
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id,
+					pkts, count, dma_id, vchan_id);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+					"(%d) %s: async dequeue does not support packed ring.\n",
+					dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, queue_id, pkts, count,
+					dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
+
+out_access_unlock:
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return n_pkts_cpl;
+}
+
+
 static __rte_always_inline uint32_t
 virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id)
-- 
2.25.1



* [PATCH 2/2] example/vhost: support to clear in-flight packets for async dequeue
  2022-04-13 18:27 [PATCH 0/2] support to clear in-flight packets for async Yuan Wang
  2022-04-13 18:27 ` [PATCH 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
@ 2022-04-13 18:27 ` Yuan Wang
  2022-05-13 16:35 ` [PATCH v2 0/2] support to clear in-flight packets for async Yuan Wang
  2 siblings, 0 replies; 6+ messages in thread
From: Yuan Wang @ 2022-04-13 18:27 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xuan.ding, xingguang.he, yvonnex.yang,
	sunil.pai.g, yuanx.wang

This patch allows vring_state_changed() to clear
in-flight dequeue packets.
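
The drain helper invoked from vring_state_changed() is outside this hunk; the
following is a hedged sketch of what such a routine can look like, modelled on
the analogous thread-safe helper added later in this series (v2 2/2). The
function name here is illustrative; dma_bind, vid2socketid and free_pkts are
the example app's own helpers.

    static void
    drain_vq_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id)
    {
        uint16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
        int pkts_inflight;
        uint16_t n_pkt;

        pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vdev->vid, queue_id);
        if (pkts_inflight <= 0)
            return;

        struct rte_mbuf *m_cpl[pkts_inflight];

        /* keep clearing until no packets remain in the async channel */
        while (pkts_inflight) {
            n_pkt = rte_vhost_clear_queue_thread_unsafe(vdev->vid, queue_id,
                    m_cpl, pkts_inflight, dma_id, 0);
            free_pkts(m_cpl, n_pkt);
            pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vdev->vid,
                    queue_id);
        }
    }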

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
---
 examples/vhost/main.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d26e40ab73..04e7821322 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1767,9 +1767,6 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
 	if (!vdev)
 		return -1;
 
-	if (queue_id != VIRTIO_RXQ)
-		return 0;
-
 	if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) {
 		if (!enable)
 			vhost_clear_queue_thread_unsafe(vdev, queue_id);
-- 
2.25.1



* [PATCH v2 0/2] support to clear in-flight packets for async
  2022-04-13 18:27 [PATCH 0/2] support to clear in-flight packets for async Yuan Wang
  2022-04-13 18:27 ` [PATCH 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
  2022-04-13 18:27 ` [PATCH 2/2] example/vhost: support to " Yuan Wang
@ 2022-05-13 16:35 ` Yuan Wang
  2022-05-13 16:35   ` [PATCH v2 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
  2022-05-13 16:35   ` [PATCH v2 2/2] example/vhost: support to " Yuan Wang
  2 siblings, 2 replies; 6+ messages in thread
From: Yuan Wang @ 2022-05-13 16:35 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xuan.ding, sunil.pai.g, yuanx.wang

These patches add support for clearing in-flight packets on the async
dequeue path and introduce a thread-safe version of the clear API.

Note: these patches depend on the following patches
(http://patches.dpdk.org/project/dpdk/patch/20220513025058.12898-5-xuan.ding@intel.com/)
(http://patches.dpdk.org/project/dpdk/patch/20220513025058.12898-6-xuan.ding@intel.com/)

v1->v2:
* Rebase to latest DPDK
* Use the thread-safe version in destroy_device()

RFC->v1:
* Protect vq access with spinlock

Yuan Wang (2):
  vhost: support clear in-flight packets for async dequeue
  example/vhost: support to clear in-flight packets for async dequeue

 doc/guides/prog_guide/vhost_lib.rst    |  8 ++-
 doc/guides/rel_notes/release_22_07.rst |  5 ++
 examples/vhost/main.c                  | 26 +++++++--
 lib/vhost/rte_vhost_async.h            | 25 ++++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/virtio_net.c                 | 80 +++++++++++++++++++++++++-
 6 files changed, 137 insertions(+), 8 deletions(-)

-- 
2.25.1



* [PATCH v2 1/2] vhost: support clear in-flight packets for async dequeue
  2022-05-13 16:35 ` [PATCH v2 0/2] support to clear in-flight packets for async Yuan Wang
@ 2022-05-13 16:35   ` Yuan Wang
  2022-05-13 16:35   ` [PATCH v2 2/2] example/vhost: support to " Yuan Wang
  1 sibling, 0 replies; 6+ messages in thread
From: Yuan Wang @ 2022-05-13 16:35 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xuan.ding, sunil.pai.g, yuanx.wang

rte_vhost_clear_queue_thread_unsafe() only supports clearing
in-flight packets for async enqueue. Now that async dequeue is
supported, this API should cover async dequeue as well.

This patch also adds a thread-safe version of this API; the
difference is that the thread-safe version takes the virtqueue
access lock.

These APIs may be used to clean up packets in the async channel
to prevent packet loss when the device state changes or
when the device is destroyed.
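
One practical difference worth noting (an illustration, not part of the
patch): the thread-safe variant only tries to take the virtqueue access lock,
so a return value of 0 can mean lock contention rather than an empty queue;
callers typically re-check the in-flight count before giving up.

    n = rte_vhost_clear_queue(vid, queue_id, pkts, RTE_DIM(pkts), dma_id, vchan_id);
    if (n == 0 && rte_vhost_async_get_inflight(vid, queue_id) > 0) {
        /* virtqueue busy or no completions yet; retry later */
    }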

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  8 ++-
 doc/guides/rel_notes/release_22_07.rst |  5 ++
 lib/vhost/rte_vhost_async.h            | 25 ++++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/virtio_net.c                 | 80 +++++++++++++++++++++++++-
 5 files changed, 116 insertions(+), 3 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 09c1c24b48..543d37e4f4 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -279,7 +279,13 @@ The following is an overview of some key Vhost API functions:
 
 * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id, vchan_id)``
 
-  Clear inflight packets which are submitted to DMA engine in vhost async data
+  Clear in-flight packets which are submitted to the async channel in vhost
+  async data path without performing any locking. Completed packets are
+  returned to applications through ``pkts``.
+
+* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, vchan_id)``
+
+  Clear in-flight packets which are submitted to async channel in vhost async data
   path. Completed packets are returned to applications through ``pkts``.
 
 * ``rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 564d88623e..2696deb8bb 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -75,6 +75,11 @@ New Features
   Added vhost async dequeue API which can leverage DMA devices to
   accelerate receiving pkts from guest.
 
+* **Added thread-safe version of inflight packet clear API in vhost library.**
+
+  Added an API which can clear the inflight packets submitted to
+  the async channel in a thread-safe manner in the vhost async data path.
+
 Removed Items
 -------------
 
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 2789492e38..2ecaced7d8 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -183,6 +183,31 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
 		uint16_t vchan_id);
 
+/**
+ * This function checks async completion status and clears packets for
+ * a specific vhost device queue. Packets which are inflight will be
+ * returned in an array.
+ *
+ * @param vid
+ *  ID of vhost device to clear data
+ * @param queue_id
+ *  Queue id to clear data
+ * @param pkts
+ *  Blank array to get return packet pointer
+ * @param count
+ *  Size of the packet array
+ * @param dma_id
+ *  The identifier of the DMA device
+ * @param vchan_id
+ *  The identifier of virtual DMA channel
+ * @return
+ *  Number of packets returned
+ */
+__rte_experimental
+uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
+		uint16_t vchan_id);
+
 /**
  * The DMA vChannels used in asynchronous data path must be configured
  * first. So this function needs to be called before enabling DMA
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 8c7211bf0d..eeaab77695 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -91,6 +91,7 @@ EXPERIMENTAL {
 	# added in 22.07
 	rte_vhost_async_get_inflight_thread_unsafe;
 	rte_vhost_async_try_dequeue_burst;
+	rte_vhost_clear_queue;
 };
 
 INTERNAL {
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 8290514e65..36e4d80ea8 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -26,6 +26,11 @@
 
 #define MAX_BATCH_LEN 256
 
+static __rte_always_inline uint16_t
+async_poll_dequeue_completed_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
+		struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id,
+		uint16_t vchan_id, bool legacy_ol_flags);
+
 /* DMA device copy operation tracking array. */
 struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
 
@@ -2102,7 +2107,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 
 	VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+	if (unlikely(queue_id >= dev->nr_vring)) {
 		VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n",
 			dev->ifname, __func__, queue_id);
 		return 0;
@@ -2123,11 +2128,82 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 	}
 
-	n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id);
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id,
+					pkts, count, dma_id, vchan_id);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+					"(%d) %s: async dequeue does not support packed ring.\n",
+					dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count,
+					dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
+
+	return n_pkts_cpl;
+}
+
+uint16_t
+rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts,
+		uint16_t count, int16_t dma_id, uint16_t vchan_id)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+	uint16_t n_pkts_cpl = 0;
+
+	if (!dev)
+		return 0;
+
+	VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__);
+	if (unlikely(queue_id >= dev->nr_vring)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n",
+			dev->ifname, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	if (!rte_spinlock_trylock(&vq->access_lock)) {
+		VHOST_LOG_DATA(ERR,
+			"(%d) %s: failed to clear async queue id %d, virtqueue busy.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	if (unlikely(!vq->async)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for queue id %d.\n",
+			dev->ifname, __func__, queue_id);
+		goto out_access_unlock;
+	}
+
+	if (unlikely(!dma_copy_track[dma_id].vchans ||
+				!dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", dev->ifname, __func__,
+				dma_id, vchan_id);
+		goto out_access_unlock;
+	}
+
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id,
+					pkts, count, dma_id, vchan_id);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+					"(%d) %s: async dequeue does not support packed ring.\n",
+					dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count,
+					dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
+
+out_access_unlock:
+	rte_spinlock_unlock(&vq->access_lock);
 
 	return n_pkts_cpl;
 }
 
+
 static __rte_always_inline uint32_t
 virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id)
-- 
2.25.1



* [PATCH v2 2/2] example/vhost: support to clear in-flight packets for async dequeue
  2022-05-13 16:35 ` [PATCH v2 0/2] support to clear in-flight packets for async Yuan Wang
  2022-05-13 16:35   ` [PATCH v2 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
@ 2022-05-13 16:35   ` Yuan Wang
  1 sibling, 0 replies; 6+ messages in thread
From: Yuan Wang @ 2022-05-13 16:35 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xuan.ding, sunil.pai.g, yuanx.wang

This patch allows vring_state_changed() to clear in-flight
dequeue packets. It also clears the in-flight packets in
a thread-safe way in destroy_device().

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
---
 examples/vhost/main.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d070391727..a97ac23061 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1537,6 +1537,25 @@ vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id)
 	}
 }
 
+static void
+vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id)
+{
+	uint16_t n_pkt = 0;
+	int pkts_inflight;
+
+	uint16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
+	pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id);
+
+	struct rte_mbuf *m_cpl[pkts_inflight];
+
+	while (pkts_inflight) {
+		n_pkt = rte_vhost_clear_queue(vdev->vid, queue_id, m_cpl,
+						pkts_inflight, dma_id, 0);
+		free_pkts(m_cpl, n_pkt);
+		pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id);
+	}
+}
+
 /*
  * Remove a device from the specific data core linked list and from the
  * main linked list. Synchronization  occurs through the use of the
@@ -1594,13 +1613,13 @@ destroy_device(int vid)
 		vdev->vid);
 
 	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
-		vhost_clear_queue_thread_unsafe(vdev, VIRTIO_RXQ);
+		vhost_clear_queue(vdev, VIRTIO_RXQ);
 		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
 		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
 	}
 
 	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
-		vhost_clear_queue_thread_unsafe(vdev, VIRTIO_TXQ);
+		vhost_clear_queue(vdev, VIRTIO_TXQ);
 		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
 		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
 	}
@@ -1759,9 +1778,6 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
 	if (!vdev)
 		return -1;
 
-	if (queue_id != VIRTIO_RXQ)
-		return 0;
-
 	if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) {
 		if (!enable)
 			vhost_clear_queue_thread_unsafe(vdev, queue_id);
-- 
2.25.1



Thread overview: 6+ messages
2022-04-13 18:27 [PATCH 0/2] support to clear in-flight packets for async Yuan Wang
2022-04-13 18:27 ` [PATCH 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
2022-04-13 18:27 ` [PATCH 2/2] example/vhost: support to " Yuan Wang
2022-05-13 16:35 ` [PATCH v2 0/2] support to clear in-flight packets for async Yuan Wang
2022-05-13 16:35   ` [PATCH v2 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
2022-05-13 16:35   ` [PATCH v2 2/2] example/vhost: support to " Yuan Wang
