DPDK patches and discussions
* [PATCH v1 0/2] vhost: introduce DMA vchannel unconfiguration
@ 2022-08-14 14:04 xuan.ding
  2022-08-14 14:04 ` [PATCH v1 1/2] " xuan.ding
                   ` (8 more replies)
  0 siblings, 9 replies; 43+ messages in thread
From: xuan.ding @ 2022-08-14 14:04 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patchset introduces a new API, rte_vhost_async_dma_unconfigure(),
to help users manually free DMA vchannels that are no longer in use.

Note: this API should be called after the async channel is unregistered.
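
A minimal sketch of the ordering this note describes (vid, queue_id, dma_id and
vchan_id below are illustrative placeholders, not values from the patchset):

```c
/* Hedged sketch: assumes an application that earlier called
 * rte_vhost_async_channel_register() and rte_vhost_async_dma_configure()
 * for the same identifiers. Not a complete program.
 */

/* 1. First tear down the async path on every registered queue. */
if (rte_vhost_async_channel_unregister(vid, queue_id) != 0)
	rte_exit(EXIT_FAILURE, "async channel unregister failed\n");

/* 2. Only afterwards release the DMA vchannel bookkeeping. */
if (rte_vhost_async_dma_unconfigure(dma_id, vchan_id) != 0)
	rte_exit(EXIT_FAILURE, "DMA vchannel unconfigure failed\n");
```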

Xuan Ding (2):
  vhost: introduce DMA vchannel unconfiguration
  example/vhost: unconfigure DMA vchannel

 doc/guides/prog_guide/vhost_lib.rst    |  5 ++++
 doc/guides/rel_notes/release_22_11.rst |  2 ++
 examples/vhost/main.c                  |  7 +++++
 lib/vhost/rte_vhost_async.h            | 17 ++++++++++++
 lib/vhost/version.map                  |  3 +++
 lib/vhost/vhost.c                      | 37 ++++++++++++++++++++++++++
 6 files changed, 71 insertions(+)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v1 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-08-14 14:04 [PATCH v1 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-08-14 14:04 ` xuan.ding
  2022-08-14 14:04 ` [PATCH v1 2/2] example/vhost: unconfigure DMA vchannel xuan.ding
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 43+ messages in thread
From: xuan.ding @ 2022-08-14 14:04 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patch adds a new API rte_vhost_async_dma_unconfigure() to unconfigure
DMA vchannels in vhost async data path.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  5 ++++
 doc/guides/rel_notes/release_22_11.rst |  2 ++
 lib/vhost/rte_vhost_async.h            | 17 ++++++++++++
 lib/vhost/version.map                  |  3 +++
 lib/vhost/vhost.c                      | 37 ++++++++++++++++++++++++++
 5 files changed, 64 insertions(+)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index bad4d819e1..78b1943d29 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -323,6 +323,11 @@ The following is an overview of some key Vhost API functions:
   Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
   VDPA_DEVICE_TYPE_BLK.
 
+* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
+
+  Clear DMA vChannels finished to use. This function needs to
+  be called after async unregister.
+
 Vhost-user Implementations
 --------------------------
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 8c021cf050..e94c006e39 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -55,6 +55,8 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added vhost API to unconfigure DMA vchannels.**
+  Added an API which helps to unconfigure DMA vchannels.
 
 Removed Items
 -------------
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 1db2a10124..0442e027fd 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -266,6 +266,23 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
 	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Unconfigure DMA vChannels in asynchronous data path.
+ *
+ * @param dma_id
+ *  the identifier of DMA device
+ * @param vchan_id
+ *  the identifier of virtual DMA channel
+ * @return
+ *  0 on success, and -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 18574346d5..013a6bcc42 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -96,6 +96,9 @@ EXPERIMENTAL {
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
 	rte_vhost_clear_queue;
+
+	# added in 22.11
+	rte_vhost_async_dma_unconfigure;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 60cb05a0ff..d215c917a2 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -1863,6 +1863,43 @@ rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id)
 	return 0;
 }
 
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
+{
+	struct rte_dma_info info;
+	void *pkts_cmpl_flag_addr;
+
+	if (!rte_dma_is_valid(dma_id)) {
+		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
+		return -1;
+	}
+
+	if (rte_dma_info_get(dma_id, &info) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
+		return -1;
+	}
+
+	if (vchan_id >= info.max_vchans) {
+		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
+		return -1;
+	}
+
+	pkts_cmpl_flag_addr = dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr;
+	if (pkts_cmpl_flag_addr) {
+		rte_free(pkts_cmpl_flag_addr);
+		pkts_cmpl_flag_addr = NULL;
+	}
+
+	if (dma_copy_track[dma_id].vchans) {
+		rte_free(dma_copy_track[dma_id].vchans);
+		dma_copy_track[dma_id].vchans = NULL;
+	}
+
+	dma_copy_track[dma_id].nr_vchans--;
+
+	return 0;
+}
+
 int
 rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 {
-- 
2.17.1



* [PATCH v1 2/2] example/vhost: unconfigure DMA vchannel
  2022-08-14 14:04 [PATCH v1 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  2022-08-14 14:04 ` [PATCH v1 1/2] " xuan.ding
@ 2022-08-14 14:04 ` xuan.ding
  2022-09-06  5:21 ` [PATCH v2 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 43+ messages in thread
From: xuan.ding @ 2022-08-14 14:04 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patch uses the rte_vhost_async_dma_unconfigure() API
to manually free the 'dma_copy_track' array rather than waiting
for the program to finish before it is freed.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 examples/vhost/main.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 7e1666f42a..1dba9724c2 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -2060,6 +2060,13 @@ main(int argc, char *argv[])
 	RTE_LCORE_FOREACH_WORKER(lcore_id)
 		rte_eal_wait_lcore(lcore_id);
 
+	for (i = 0; i < dma_count; i++) {
+		if (rte_vhost_async_dma_unconfigure(dmas_id[i], 0) < 0) {
+			RTE_LOG(ERR, VHOST_PORT, "Failed to unconfigure DMA in vhost.\n");
+			rte_exit(EXIT_FAILURE, "Cannot use given DMA device\n");
+		}
+	}
+
 	/* clean up the EAL */
 	rte_eal_cleanup();
 
-- 
2.17.1



* [PATCH v2 0/2] vhost: introduce DMA vchannel unconfiguration
  2022-08-14 14:04 [PATCH v1 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  2022-08-14 14:04 ` [PATCH v1 1/2] " xuan.ding
  2022-08-14 14:04 ` [PATCH v1 2/2] example/vhost: unconfigure DMA vchannel xuan.ding
@ 2022-09-06  5:21 ` xuan.ding
  2022-09-06  5:21   ` [PATCH v2 1/2] " xuan.ding
  2022-09-06  5:21   ` [PATCH v2 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  2022-09-29  1:32 ` [PATCH v3 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
                   ` (5 subsequent siblings)
  8 siblings, 2 replies; 43+ messages in thread
From: xuan.ding @ 2022-09-06  5:21 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patchset introduces a new API, rte_vhost_async_dma_unconfigure(),
to help users manually free DMA vchannels that are no longer in use.

Note: this API should be called after the async channel is unregistered.

v2:
* Add spinlock protection.
* Fix a memory leak issue.
* Refine the doc.

Xuan Ding (2):
  vhost: introduce DMA vchannel unconfiguration
  examples/vhost: unconfigure DMA vchannel

 doc/guides/prog_guide/vhost_lib.rst    |  5 ++
 doc/guides/rel_notes/release_22_11.rst |  2 +
 examples/vhost/main.c                  |  7 +++
 lib/vhost/rte_vhost_async.h            | 17 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
 6 files changed, 98 insertions(+), 5 deletions(-)

-- 
2.17.1



* [PATCH v2 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-09-06  5:21 ` [PATCH v2 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-09-06  5:21   ` xuan.ding
  2022-09-26  6:06     ` Xia, Chenbo
  2022-09-06  5:21   ` [PATCH v2 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  1 sibling, 1 reply; 43+ messages in thread
From: xuan.ding @ 2022-09-06  5:21 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patch adds a new API rte_vhost_async_dma_unconfigure() to unconfigure
DMA vchannels in vhost async data path.

Lock protection is also added to protect DMA vchannel configuration and
unconfiguration from concurrent calls.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  5 ++
 doc/guides/rel_notes/release_22_11.rst |  2 +
 lib/vhost/rte_vhost_async.h            | 17 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
 5 files changed, 91 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index bad4d819e1..22764cbeaa 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -323,6 +323,11 @@ The following is an overview of some key Vhost API functions:
   Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
   VDPA_DEVICE_TYPE_BLK.
 
+* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
+
+  Clear DMA vChannels finished to use. This function needs to
+  be called after the deregisterration of async path has been finished.
+
 Vhost-user Implementations
 --------------------------
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 8c021cf050..e94c006e39 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -55,6 +55,8 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added vhost API to unconfigure DMA vchannels.**
+  Added an API which helps to unconfigure DMA vchannels.
 
 Removed Items
 -------------
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 1db2a10124..0442e027fd 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -266,6 +266,23 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
 	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Unconfigure DMA vChannels in asynchronous data path.
+ *
+ * @param dma_id
+ *  the identifier of DMA device
+ * @param vchan_id
+ *  the identifier of virtual DMA channel
+ * @return
+ *  0 on success, and -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 18574346d5..013a6bcc42 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -96,6 +96,9 @@ EXPERIMENTAL {
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
 	rte_vhost_clear_queue;
+
+	# added in 22.11
+	rte_vhost_async_dma_unconfigure;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 60cb05a0ff..273616da11 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -23,6 +23,7 @@
 
 struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
 pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
+static rte_spinlock_t vhost_dma_lock = RTE_SPINLOCK_INITIALIZER;
 
 struct vhost_vq_stats_name_off {
 	char name[RTE_VHOST_STATS_NAME_SIZE];
@@ -1870,19 +1871,20 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	void *pkts_cmpl_flag_addr;
 	uint16_t max_desc;
 
+	rte_spinlock_lock(&vhost_dma_lock);
 	if (!rte_dma_is_valid(dma_id)) {
 		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (rte_dma_info_get(dma_id, &info) != 0) {
 		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (vchan_id >= info.max_vchans) {
 		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
-		return -1;
+		goto error;
 	}
 
 	if (!dma_copy_track[dma_id].vchans) {
@@ -1894,7 +1896,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			VHOST_LOG_CONFIG("dma", ERR,
 				"Failed to allocate vchans for DMA %d vChannel %u.\n",
 				dma_id, vchan_id);
-			return -1;
+			goto error;
 		}
 
 		dma_copy_track[dma_id].vchans = vchans;
@@ -1903,6 +1905,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
 		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n",
 			dma_id, vchan_id);
+		rte_spinlock_unlock(&vhost_dma_lock);
 		return 0;
 	}
 
@@ -1920,7 +1923,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			rte_free(dma_copy_track[dma_id].vchans);
 			dma_copy_track[dma_id].vchans = NULL;
 		}
-		return -1;
+		goto error;
 	}
 
 	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = pkts_cmpl_flag_addr;
@@ -1928,7 +1931,12 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
 	dma_copy_track[dma_id].nr_vchans++;
 
+	rte_spinlock_unlock(&vhost_dma_lock);
 	return 0;
+
+error:
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return -1;
 }
 
 int
@@ -2117,5 +2125,56 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 	return 0;
 }
 
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
+{
+	struct rte_dma_info info;
+	uint16_t max_desc;
+	int i;
+
+	rte_spinlock_lock(&vhost_dma_lock);
+	if (!rte_dma_is_valid(dma_id)) {
+		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
+		goto error;
+	}
+
+	if (rte_dma_info_get(dma_id, &info) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
+		goto error;
+	}
+
+	if (vchan_id >= info.max_vchans) {
+		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	max_desc = info.max_desc;
+	for (i = 0; i < max_desc; i++) {
+		if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] != NULL) {
+			rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i]);
+			dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] = NULL;
+		}
+	}
+
+	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr);
+		dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = NULL;
+	}
+
+	if (dma_copy_track[dma_id].vchans != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans);
+		dma_copy_track[dma_id].vchans = NULL;
+	}
+
+	dma_copy_track[dma_id].nr_vchans--;
+
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return 0;
+
+error:
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return -1;
+}
+
 RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
 RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
-- 
2.17.1



* [PATCH v2 2/2] examples/vhost: unconfigure DMA vchannel
  2022-09-06  5:21 ` [PATCH v2 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  2022-09-06  5:21   ` [PATCH v2 1/2] " xuan.ding
@ 2022-09-06  5:21   ` xuan.ding
  1 sibling, 0 replies; 43+ messages in thread
From: xuan.ding @ 2022-09-06  5:21 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patch applies the rte_vhost_async_dma_unconfigure() API
to manually free the 'dma_copy_track' array instead of waiting
until the program ends for it to be released.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 examples/vhost/main.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 7e1666f42a..1754d9ee27 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1624,6 +1624,13 @@ destroy_device(int vid)
 		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
 	}
 
+	for (i = 0; i < dma_count; i++) {
+		if (rte_vhost_async_dma_unconfigure(dmas_id[i], 0) < 0) {
+			RTE_LOG(ERR, VHOST_PORT, "Failed to unconfigure DMA in vhost.\n");
+			rte_exit(EXIT_FAILURE, "Cannot use given DMA device\n");
+		}
+	}
+
 	rte_free(vdev);
 }
 
-- 
2.17.1



* RE: [PATCH v2 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-09-06  5:21   ` [PATCH v2 1/2] " xuan.ding
@ 2022-09-26  6:06     ` Xia, Chenbo
  2022-09-26  6:43       ` Ding, Xuan
  0 siblings, 1 reply; 43+ messages in thread
From: Xia, Chenbo @ 2022-09-26  6:06 UTC (permalink / raw)
  To: Ding, Xuan, maxime.coquelin
  Cc: dev, Hu, Jiayu, He, Xingguang, Yang, YvonneX, Jiang, Cheng1,
	Wang, YuanX, Ma, WenwuX

> -----Original Message-----
> From: Ding, Xuan <xuan.ding@intel.com>
> Sent: Tuesday, September 6, 2022 1:22 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>; Jiang,
> Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX <yuanx.wang@intel.com>; Ma,
> WenwuX <wenwux.ma@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> Subject: [PATCH v2 1/2] vhost: introduce DMA vchannel unconfiguration
> 
> From: Xuan Ding <xuan.ding@intel.com>
> 
> This patch adds a new API rte_vhost_async_dma_unconfigure() to unconfigure
> DMA vchannels in vhost async data path.
> 
> Lock protection are also added to protect DMA vchannels configuration and
> unconfiguration from concurrent calls.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>  doc/guides/prog_guide/vhost_lib.rst    |  5 ++
>  doc/guides/rel_notes/release_22_11.rst |  2 +
>  lib/vhost/rte_vhost_async.h            | 17 +++++++
>  lib/vhost/version.map                  |  3 ++
>  lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
>  5 files changed, 91 insertions(+), 5 deletions(-)
> 
> diff --git a/doc/guides/prog_guide/vhost_lib.rst
> b/doc/guides/prog_guide/vhost_lib.rst
> index bad4d819e1..22764cbeaa 100644
> --- a/doc/guides/prog_guide/vhost_lib.rst
> +++ b/doc/guides/prog_guide/vhost_lib.rst
> @@ -323,6 +323,11 @@ The following is an overview of some key Vhost API
> functions:
>    Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
>    VDPA_DEVICE_TYPE_BLK.
> 
> +* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
> +
> +  Clear DMA vChannels finished to use. This function needs to
> +  be called after the deregisterration of async path has been finished.

Deregistration

> +
>  Vhost-user Implementations
>  --------------------------
> 
> diff --git a/doc/guides/rel_notes/release_22_11.rst
> b/doc/guides/rel_notes/release_22_11.rst
> index 8c021cf050..e94c006e39 100644
> --- a/doc/guides/rel_notes/release_22_11.rst
> +++ b/doc/guides/rel_notes/release_22_11.rst
> @@ -55,6 +55,8 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =======================================================
> 
> +* **Added vhost API to unconfigure DMA vchannels.**
> +  Added an API which helps to unconfigure DMA vchannels.

Added XXX for async vhost

Overall LGTM. It seems it needs some rebasing too.

Thanks,
Chenbo

> 
>  Removed Items
>  -------------
> diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> index 1db2a10124..0442e027fd 100644
> --- a/lib/vhost/rte_vhost_async.h
> +++ b/lib/vhost/rte_vhost_async.h
> @@ -266,6 +266,23 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t
> queue_id,
>  	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t
> count,
>  	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
> 
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior
> notice.
> + *
> + * Unconfigure DMA vChannels in asynchronous data path.
> + *
> + * @param dma_id
> + *  the identifier of DMA device
> + * @param vchan_id
> + *  the identifier of virtual DMA channel
> + * @return
> + *  0 on success, and -1 on failure
> + */
> +__rte_experimental
> +int
> +rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> index 18574346d5..013a6bcc42 100644
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> @@ -96,6 +96,9 @@ EXPERIMENTAL {
>  	rte_vhost_async_try_dequeue_burst;
>  	rte_vhost_driver_get_vdpa_dev_type;
>  	rte_vhost_clear_queue;
> +
> +	# added in 22.11
> +	rte_vhost_async_dma_unconfigure;
>  };
> 
>  INTERNAL {
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 60cb05a0ff..273616da11 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -23,6 +23,7 @@
> 
>  struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
>  pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
> +static rte_spinlock_t vhost_dma_lock = RTE_SPINLOCK_INITIALIZER;
> 
>  struct vhost_vq_stats_name_off {
>  	char name[RTE_VHOST_STATS_NAME_SIZE];
> @@ -1870,19 +1871,20 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  	void *pkts_cmpl_flag_addr;
>  	uint16_t max_desc;
> 
> +	rte_spinlock_lock(&vhost_dma_lock);
>  	if (!rte_dma_is_valid(dma_id)) {
>  		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
> -		return -1;
> +		goto error;
>  	}
> 
>  	if (rte_dma_info_get(dma_id, &info) != 0) {
>  		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d
> information.\n", dma_id);
> -		return -1;
> +		goto error;
>  	}
> 
>  	if (vchan_id >= info.max_vchans) {
>  		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n",
> dma_id, vchan_id);
> -		return -1;
> +		goto error;
>  	}
> 
>  	if (!dma_copy_track[dma_id].vchans) {
> @@ -1894,7 +1896,7 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  			VHOST_LOG_CONFIG("dma", ERR,
>  				"Failed to allocate vchans for DMA %d
> vChannel %u.\n",
>  				dma_id, vchan_id);
> -			return -1;
> +			goto error;
>  		}
> 
>  		dma_copy_track[dma_id].vchans = vchans;
> @@ -1903,6 +1905,7 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
>  		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already
> registered.\n",
>  			dma_id, vchan_id);
> +		rte_spinlock_unlock(&vhost_dma_lock);
>  		return 0;
>  	}
> 
> @@ -1920,7 +1923,7 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  			rte_free(dma_copy_track[dma_id].vchans);
>  			dma_copy_track[dma_id].vchans = NULL;
>  		}
> -		return -1;
> +		goto error;
>  	}
> 
>  	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr =
> pkts_cmpl_flag_addr;
> @@ -1928,7 +1931,12 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
>  	dma_copy_track[dma_id].nr_vchans++;
> 
> +	rte_spinlock_unlock(&vhost_dma_lock);
>  	return 0;
> +
> +error:
> +	rte_spinlock_unlock(&vhost_dma_lock);
> +	return -1;
>  }
> 
>  int
> @@ -2117,5 +2125,56 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t
> queue_id)
>  	return 0;
>  }
> 
> +int
> +rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
> +{
> +	struct rte_dma_info info;
> +	uint16_t max_desc;
> +	int i;
> +
> +	rte_spinlock_lock(&vhost_dma_lock);
> +	if (!rte_dma_is_valid(dma_id)) {
> +		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
> +		goto error;
> +	}
> +
> +	if (rte_dma_info_get(dma_id, &info) != 0) {
> +		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d
> information.\n", dma_id);
> +		goto error;
> +	}
> +
> +	if (vchan_id >= info.max_vchans) {
> +		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n",
> dma_id, vchan_id);
> +		goto error;
> +	}
> +
> +	max_desc = info.max_desc;
> +	for (i = 0; i < max_desc; i++) {
> +		if
> (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] != NULL) {
> +
> 	rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr
> [i]);
> +
> 	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] =
> NULL;
> +		}
> +	}
> +
> +	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr !=
> NULL) {
> +
> 	rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr
> );
> +		dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr =
> NULL;
> +	}
> +
> +	if (dma_copy_track[dma_id].vchans != NULL) {
> +		rte_free(dma_copy_track[dma_id].vchans);
> +		dma_copy_track[dma_id].vchans = NULL;
> +	}
> +
> +	dma_copy_track[dma_id].nr_vchans--;
> +
> +	rte_spinlock_unlock(&vhost_dma_lock);
> +	return 0;
> +
> +error:
> +	rte_spinlock_unlock(&vhost_dma_lock);
> +	return -1;
> +}
> +
>  RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
>  RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
> --
> 2.17.1



* RE: [PATCH v2 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-09-26  6:06     ` Xia, Chenbo
@ 2022-09-26  6:43       ` Ding, Xuan
  0 siblings, 0 replies; 43+ messages in thread
From: Ding, Xuan @ 2022-09-26  6:43 UTC (permalink / raw)
  To: Xia, Chenbo, maxime.coquelin
  Cc: dev, Hu, Jiayu, He, Xingguang, Yang, YvonneX, Jiang, Cheng1,
	Wang, YuanX, Ma, WenwuX

Hi Chenbo,

Thanks for your comments, please see replies inline.

> -----Original Message-----
> From: Xia, Chenbo <chenbo.xia@intel.com>
> Sent: Monday, September 26, 2022 2:07 PM
> To: Ding, Xuan <xuan.ding@intel.com>; maxime.coquelin@redhat.com
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>;
> Jiang, Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX
> <yuanx.wang@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>
> Subject: RE: [PATCH v2 1/2] vhost: introduce DMA vchannel unconfiguration
> 
> > -----Original Message-----
> > From: Ding, Xuan <xuan.ding@intel.com>
> > Sent: Tuesday, September 6, 2022 1:22 PM
> > To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> > Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> > <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>;
> > Jiang,
> > Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX <yuanx.wang@intel.com>;
> > Ma, WenwuX <wenwux.ma@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> > Subject: [PATCH v2 1/2] vhost: introduce DMA vchannel unconfiguration
> >
> > From: Xuan Ding <xuan.ding@intel.com>
> >
> > This patch adds a new API rte_vhost_async_dma_unconfigure() to
> > unconfigure DMA vchannels in vhost async data path.
> >
> > Lock protection are also added to protect DMA vchannels configuration
> > and unconfiguration from concurrent calls.
> >
> > Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> > ---
> >  doc/guides/prog_guide/vhost_lib.rst    |  5 ++
> >  doc/guides/rel_notes/release_22_11.rst |  2 +
> >  lib/vhost/rte_vhost_async.h            | 17 +++++++
> >  lib/vhost/version.map                  |  3 ++
> >  lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
> >  5 files changed, 91 insertions(+), 5 deletions(-)
> >
> > diff --git a/doc/guides/prog_guide/vhost_lib.rst
> > b/doc/guides/prog_guide/vhost_lib.rst
> > index bad4d819e1..22764cbeaa 100644
> > --- a/doc/guides/prog_guide/vhost_lib.rst
> > +++ b/doc/guides/prog_guide/vhost_lib.rst
> > @@ -323,6 +323,11 @@ The following is an overview of some key Vhost
> > API
> > functions:
> >    Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
> >    VDPA_DEVICE_TYPE_BLK.
> >
> > +* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
> > +
> > +  Clear DMA vChannels finished to use. This function needs to  be
> > + called after the deregisterration of async path has been finished.
> 
> Deregistration

Thanks for your catch.

> 
> > +
> >  Vhost-user Implementations
> >  --------------------------
> >
> > diff --git a/doc/guides/rel_notes/release_22_11.rst
> > b/doc/guides/rel_notes/release_22_11.rst
> > index 8c021cf050..e94c006e39 100644
> > --- a/doc/guides/rel_notes/release_22_11.rst
> > +++ b/doc/guides/rel_notes/release_22_11.rst
> > @@ -55,6 +55,8 @@ New Features
> >       Also, make sure to start the actual text at the margin.
> >       =======================================================
> >
> > +* **Added vhost API to unconfigure DMA vchannels.**
> > +  Added an API which helps to unconfigure DMA vchannels.
> 
> Added XXX for async vhost

Good idea.

> 
> Overall LGTM. It seems it needs some rebasing too.

I'm preparing v3 patch series, please see next version.

Regards,
Xuan

> 
> Thanks,
> Chenbo
> 
> >
> >  Removed Items
> >  -------------
> > diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> > index 1db2a10124..0442e027fd 100644
> > --- a/lib/vhost/rte_vhost_async.h
> > +++ b/lib/vhost/rte_vhost_async.h
> > @@ -266,6 +266,23 @@ rte_vhost_async_try_dequeue_burst(int vid,
> > uint16_t queue_id,
> >  	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t
> > count,
> >  	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change, or be removed, without prior
> > notice.
> > + *
> > + * Unconfigure DMA vChannels in asynchronous data path.
> > + *
> > + * @param dma_id
> > + *  the identifier of DMA device
> > + * @param vchan_id
> > + *  the identifier of virtual DMA channel
> > + * @return
> > + *  0 on success, and -1 on failure
> > + */
> > +__rte_experimental
> > +int
> > +rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
> > +
> >  #ifdef __cplusplus
> >  }
> >  #endif
> > diff --git a/lib/vhost/version.map b/lib/vhost/version.map index
> > 18574346d5..013a6bcc42 100644
> > --- a/lib/vhost/version.map
> > +++ b/lib/vhost/version.map
> > @@ -96,6 +96,9 @@ EXPERIMENTAL {
> >  	rte_vhost_async_try_dequeue_burst;
> >  	rte_vhost_driver_get_vdpa_dev_type;
> >  	rte_vhost_clear_queue;
> > +
> > +	# added in 22.11
> > +	rte_vhost_async_dma_unconfigure;
> >  };
> >
> >  INTERNAL {
> > diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index
> > 60cb05a0ff..273616da11 100644
> > --- a/lib/vhost/vhost.c
> > +++ b/lib/vhost/vhost.c
> > @@ -23,6 +23,7 @@
> >
> >  struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
> >  pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
> > +static rte_spinlock_t vhost_dma_lock = RTE_SPINLOCK_INITIALIZER;
> >
> >  struct vhost_vq_stats_name_off {
> >  	char name[RTE_VHOST_STATS_NAME_SIZE]; @@ -1870,19 +1871,20
> @@
> > rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
> >  	void *pkts_cmpl_flag_addr;
> >  	uint16_t max_desc;
> >
> > +	rte_spinlock_lock(&vhost_dma_lock);
> >  	if (!rte_dma_is_valid(dma_id)) {
> >  		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n",
> dma_id);
> > -		return -1;
> > +		goto error;
> >  	}
> >
> >  	if (rte_dma_info_get(dma_id, &info) != 0) {
> >  		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d
> information.\n",
> > dma_id);
> > -		return -1;
> > +		goto error;
> >  	}
> >
> >  	if (vchan_id >= info.max_vchans) {
> >  		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d
> vChannel %u.\n",
> > dma_id, vchan_id);
> > -		return -1;
> > +		goto error;
> >  	}
> >
> >  	if (!dma_copy_track[dma_id].vchans) { @@ -1894,7 +1896,7 @@
> > rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
> >  			VHOST_LOG_CONFIG("dma", ERR,
> >  				"Failed to allocate vchans for DMA %d
> vChannel %u.\n",
> >  				dma_id, vchan_id);
> > -			return -1;
> > +			goto error;
> >  		}
> >
> >  		dma_copy_track[dma_id].vchans = vchans; @@ -1903,6
> +1905,7 @@
> > rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
> >  	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)
> {
> >  		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u
> already
> > registered.\n",
> >  			dma_id, vchan_id);
> > +		rte_spinlock_unlock(&vhost_dma_lock);
> >  		return 0;
> >  	}
> >
> > @@ -1920,7 +1923,7 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> > uint16_t vchan_id)
> >  			rte_free(dma_copy_track[dma_id].vchans);
> >  			dma_copy_track[dma_id].vchans = NULL;
> >  		}
> > -		return -1;
> > +		goto error;
> >  	}
> >
> >  	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr =
> > pkts_cmpl_flag_addr; @@ -1928,7 +1931,12 @@
> > rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
> >  	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc -
> 1;
> >  	dma_copy_track[dma_id].nr_vchans++;
> >
> > +	rte_spinlock_unlock(&vhost_dma_lock);
> >  	return 0;
> > +
> > +error:
> > +	rte_spinlock_unlock(&vhost_dma_lock);
> > +	return -1;
> >  }
> >
> >  int
> > @@ -2117,5 +2125,56 @@ int rte_vhost_vring_stats_reset(int vid,
> > uint16_t
> > queue_id)
> >  	return 0;
> >  }
> >
> > +int
> > +rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id) {
> > +	struct rte_dma_info info;
> > +	uint16_t max_desc;
> > +	int i;
> > +
> > +	rte_spinlock_lock(&vhost_dma_lock);
> > +	if (!rte_dma_is_valid(dma_id)) {
> > +		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n",
> dma_id);
> > +		goto error;
> > +	}
> > +
> > +	if (rte_dma_info_get(dma_id, &info) != 0) {
> > +		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d
> > information.\n", dma_id);
> > +		goto error;
> > +	}
> > +
> > +	if (vchan_id >= info.max_vchans) {
> > +		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d
> vChannel %u.\n",
> > dma_id, vchan_id);
> > +		goto error;
> > +	}
> > +
> > +	max_desc = info.max_desc;
> > +	for (i = 0; i < max_desc; i++) {
> > +		if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] != NULL) {
> > +			rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i]);
> > +			dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] = NULL;
> > +		}
> > +	}
> > +
> > +	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr != NULL) {
> > +		rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr);
> > +		dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = NULL;
> > +	}
> > +
> > +	if (dma_copy_track[dma_id].vchans != NULL) {
> > +		rte_free(dma_copy_track[dma_id].vchans);
> > +		dma_copy_track[dma_id].vchans = NULL;
> > +	}
> > +
> > +	dma_copy_track[dma_id].nr_vchans--;
> > +
> > +	rte_spinlock_unlock(&vhost_dma_lock);
> > +	return 0;
> > +
> > +error:
> > +	rte_spinlock_unlock(&vhost_dma_lock);
> > +	return -1;
> > +}
> > +
> >  RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
> > RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
> > --
> > 2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v3 0/2] vhost: introduce DMA vchannel unconfiguration
  2022-08-14 14:04 [PATCH v1 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
                   ` (2 preceding siblings ...)
  2022-09-06  5:21 ` [PATCH v2 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-09-29  1:32 ` xuan.ding
  2022-09-29  1:32   ` [PATCH v3 1/2] " xuan.ding
  2022-09-29  1:32   ` [PATCH v3 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  2022-10-13  6:40 ` [PATCH v4 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
                   ` (4 subsequent siblings)
  8 siblings, 2 replies; 43+ messages in thread
From: xuan.ding @ 2022-09-29  1:32 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patchset introduces a new API rte_vhost_async_dma_unconfigure()
to help users manually free DMA vchannels that are no longer in use.

Note: this API should be called after the async channel has been unregistered.

v3:
* Rebase to latest DPDK.
* Refine some descriptions in the doc.
* Fix one bug in the vhost example.

v2:
* Add spinlock protection.
* Fix a memory leak issue.
* Refine the doc.

Xuan Ding (2):
  vhost: introduce DMA vchannel unconfiguration
  examples/vhost: unconfigure DMA vchannel

 doc/guides/prog_guide/vhost_lib.rst    |  6 +++
 doc/guides/rel_notes/release_22_11.rst |  3 ++
 examples/vhost/main.c                  | 45 ++++++++++++-----
 examples/vhost/main.h                  |  1 +
 lib/vhost/rte_vhost_async.h            | 18 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
 7 files changed, 127 insertions(+), 18 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v3 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-09-29  1:32 ` [PATCH v3 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-09-29  1:32   ` xuan.ding
  2022-09-29  1:32   ` [PATCH v3 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  1 sibling, 0 replies; 43+ messages in thread
From: xuan.ding @ 2022-09-29  1:32 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patch adds a new API rte_vhost_async_dma_unconfigure() to unconfigure
DMA vchannels in the vhost async data path.

Lock protection is also added to protect DMA vchannel configuration and
unconfiguration from concurrent calls.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  6 +++
 doc/guides/rel_notes/release_22_11.rst |  3 ++
 lib/vhost/rte_vhost_async.h            | 18 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
 5 files changed, 94 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index bad4d819e1..d3cef978d0 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -323,6 +323,12 @@ The following is an overview of some key Vhost API functions:
   Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
   VDPA_DEVICE_TYPE_BLK.
 
+* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
+
+  Clean up DMA vChannels that are no longer in use. This function
+  needs to be called after the async DMA vchannel has been
+  deregistered.
+
 Vhost-user Implementations
 --------------------------
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 684bf74596..a641d8a6b8 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -67,6 +67,9 @@ New Features
 
   * Added support to set device link down/up.
 
+* **Added DMA vChannel unconfiguration for async vhost.**
+
+  * Added support to unconfigure DMA vChannels that have been unregistered.
 
 Removed Items
 -------------
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 1db2a10124..6ee4f7258d 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -266,6 +266,24 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
 	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Unconfigure DMA vChannels in asynchronous data path.
+ * This function should be called after the DMA vChannel has been unregistered.
+ *
+ * @param dma_id
+ *  the identifier of DMA device
+ * @param vchan_id
+ *  the identifier of virtual DMA channel
+ * @return
+ *  0 on success, and -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 18574346d5..013a6bcc42 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -96,6 +96,9 @@ EXPERIMENTAL {
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
 	rte_vhost_clear_queue;
+
+	# added in 22.11
+	rte_vhost_async_dma_unconfigure;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index aa671f47a3..f0f337bf5b 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -23,6 +23,7 @@
 
 struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
 pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
+static rte_spinlock_t vhost_dma_lock = RTE_SPINLOCK_INITIALIZER;
 
 struct vhost_vq_stats_name_off {
 	char name[RTE_VHOST_STATS_NAME_SIZE];
@@ -1850,19 +1851,20 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	void *pkts_cmpl_flag_addr;
 	uint16_t max_desc;
 
+	rte_spinlock_lock(&vhost_dma_lock);
 	if (!rte_dma_is_valid(dma_id)) {
 		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (rte_dma_info_get(dma_id, &info) != 0) {
 		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (vchan_id >= info.max_vchans) {
 		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
-		return -1;
+		goto error;
 	}
 
 	if (!dma_copy_track[dma_id].vchans) {
@@ -1874,7 +1876,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			VHOST_LOG_CONFIG("dma", ERR,
 				"Failed to allocate vchans for DMA %d vChannel %u.\n",
 				dma_id, vchan_id);
-			return -1;
+			goto error;
 		}
 
 		dma_copy_track[dma_id].vchans = vchans;
@@ -1883,6 +1885,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
 		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n",
 			dma_id, vchan_id);
+		rte_spinlock_unlock(&vhost_dma_lock);
 		return 0;
 	}
 
@@ -1900,7 +1903,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			rte_free(dma_copy_track[dma_id].vchans);
 			dma_copy_track[dma_id].vchans = NULL;
 		}
-		return -1;
+		goto error;
 	}
 
 	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = pkts_cmpl_flag_addr;
@@ -1908,7 +1911,12 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
 	dma_copy_track[dma_id].nr_vchans++;
 
+	rte_spinlock_unlock(&vhost_dma_lock);
 	return 0;
+
+error:
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return -1;
 }
 
 int
@@ -2097,5 +2105,56 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 	return 0;
 }
 
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
+{
+	struct rte_dma_info info;
+	uint16_t max_desc;
+	int i;
+
+	rte_spinlock_lock(&vhost_dma_lock);
+	if (!rte_dma_is_valid(dma_id)) {
+		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
+		goto error;
+	}
+
+	if (rte_dma_info_get(dma_id, &info) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
+		goto error;
+	}
+
+	if (vchan_id >= info.max_vchans) {
+		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	max_desc = info.max_desc;
+	for (i = 0; i < max_desc; i++) {
+		if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] != NULL) {
+			rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i]);
+			dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] = NULL;
+		}
+	}
+
+	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr);
+		dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = NULL;
+	}
+
+	if (dma_copy_track[dma_id].vchans != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans);
+		dma_copy_track[dma_id].vchans = NULL;
+	}
+
+	dma_copy_track[dma_id].nr_vchans--;
+
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return 0;
+
+error:
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return -1;
+}
+
 RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
 RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v3 2/2] examples/vhost: unconfigure DMA vchannel
  2022-09-29  1:32 ` [PATCH v3 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  2022-09-29  1:32   ` [PATCH v3 1/2] " xuan.ding
@ 2022-09-29  1:32   ` xuan.ding
  2022-09-29  8:27     ` Xia, Chenbo
  1 sibling, 1 reply; 43+ messages in thread
From: xuan.ding @ 2022-09-29  1:32 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patch uses the rte_vhost_async_dma_unconfigure() API
to manually free DMA vchannels as soon as they are no longer
needed, instead of waiting until the program exits.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 examples/vhost/main.c | 45 ++++++++++++++++++++++++++++++-------------
 examples/vhost/main.h |  1 +
 2 files changed, 33 insertions(+), 13 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 0fa4753e70..32f396d88a 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1558,6 +1558,28 @@ vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id)
 	}
 }
 
+static void
+vhost_clear_async(struct vhost_dev *vdev, int vid, uint16_t queue_id)
+{
+	int16_t dma_id;
+	uint16_t ref_count;
+
+	if (dma_bind[vid].dmas[queue_id].async_enabled) {
+		vhost_clear_queue(vdev, queue_id);
+		rte_vhost_async_channel_unregister(vid, queue_id);
+		dma_bind[vid].dmas[queue_id].async_enabled = false;
+	}
+
+	dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
+	dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dma_ref_count--;
+	ref_count = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dma_ref_count;
+
+	if (ref_count == 0 && dma_id != INVALID_DMA_ID) {
+		if (rte_vhost_async_dma_unconfigure(dma_id, 0) < 0)
+			RTE_LOG(ERR, VHOST_PORT, "Failed to unconfigure DMA in vhost.\n");
+	}
+}
+
 /*
  * Remove a device from the specific data core linked list and from the
  * main linked list. Synchronization  occurs through the use of the
@@ -1614,17 +1636,8 @@ destroy_device(int vid)
 		"(%d) device has been removed from data core\n",
 		vdev->vid);
 
-	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
-		vhost_clear_queue(vdev, VIRTIO_RXQ);
-		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
-		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
-	}
-
-	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
-		vhost_clear_queue(vdev, VIRTIO_TXQ);
-		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
-		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
-	}
+	vhost_clear_async(vdev, vid, VIRTIO_RXQ);
+	vhost_clear_async(vdev, vid, VIRTIO_TXQ);
 
 	rte_free(vdev);
 }
@@ -1673,14 +1686,19 @@ vhost_async_channel_register(int vid)
 
 	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
 		rx_ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
-		if (rx_ret == 0)
+		if (rx_ret == 0) {
 			dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].async_enabled = true;
+			dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].dma_ref_count++;
+		}
+
 	}
 
 	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].dev_id != INVALID_DMA_ID) {
 		tx_ret = rte_vhost_async_channel_register(vid, VIRTIO_TXQ);
-		if (tx_ret == 0)
+		if (tx_ret == 0) {
 			dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].async_enabled = true;
+			dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].dma_ref_count++;
+		}
 	}
 
 	return rx_ret | tx_ret;
@@ -1886,6 +1904,7 @@ reset_dma(void)
 		for (j = 0; j < RTE_MAX_QUEUES_PER_PORT * 2; j++) {
 			dma_bind[i].dmas[j].dev_id = INVALID_DMA_ID;
 			dma_bind[i].dmas[j].async_enabled = false;
+			dma_bind[i].dmas[j].dma_ref_count = 0;
 		}
 	}
 
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index 2fcb8376c5..2b2cf828d3 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -96,6 +96,7 @@ struct dma_info {
 	struct rte_pci_addr addr;
 	int16_t dev_id;
 	bool async_enabled;
+	uint16_t dma_ref_count;
 };
 
 struct dma_for_vhost {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v3 2/2] examples/vhost: unconfigure DMA vchannel
  2022-09-29  1:32   ` [PATCH v3 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
@ 2022-09-29  8:27     ` Xia, Chenbo
  2022-10-08  0:38       ` Ding, Xuan
  0 siblings, 1 reply; 43+ messages in thread
From: Xia, Chenbo @ 2022-09-29  8:27 UTC (permalink / raw)
  To: Ding, Xuan, maxime.coquelin
  Cc: dev, Hu, Jiayu, He, Xingguang, Yang, YvonneX, Jiang, Cheng1,
	Wang, YuanX, Ma, WenwuX

> -----Original Message-----
> From: Ding, Xuan <xuan.ding@intel.com>
> Sent: Thursday, September 29, 2022 9:33 AM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>; Jiang,
> Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX <yuanx.wang@intel.com>; Ma,
> WenwuX <wenwux.ma@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> Subject: [PATCH v3 2/2] examples/vhost: unconfigure DMA vchannel
> 
> From: Xuan Ding <xuan.ding@intel.com>
> 
> This patch applies rte_vhost_async_dma_unconfigure() API
> to manually free DMA vchannels instead of waiting
> until the program ends to be released.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>  examples/vhost/main.c | 45 ++++++++++++++++++++++++++++++-------------
>  examples/vhost/main.h |  1 +
>  2 files changed, 33 insertions(+), 13 deletions(-)
> 
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index 0fa4753e70..32f396d88a 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -1558,6 +1558,28 @@ vhost_clear_queue(struct vhost_dev *vdev, uint16_t
> queue_id)
>  	}
>  }
> 
> +static void
> +vhost_clear_async(struct vhost_dev *vdev, int vid, uint16_t queue_id)
> +{
> +	int16_t dma_id;
> +	uint16_t ref_count;
> +
> +	if (dma_bind[vid].dmas[queue_id].async_enabled) {
> +		vhost_clear_queue(vdev, queue_id);
> +		rte_vhost_async_channel_unregister(vid, queue_id);
> +		dma_bind[vid].dmas[queue_id].async_enabled = false;
> +	}
> +
> +	dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
> +	dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dma_ref_count--;
> +	ref_count = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dma_ref_count;

The above two lines could be combined into 'ref_count = --XXXX'.

But I wonder: is this dma_ref_count really needed? It's always 0/1 like
async_enabled, no? If it targets the case of multiple ports sharing the
same DMA device, it should not be implemented like this.

Thanks,
Chenbo

> +
> +	if (ref_count == 0 && dma_id != INVALID_DMA_ID) {
> +		if (rte_vhost_async_dma_unconfigure(dma_id, 0) < 0)
> +			RTE_LOG(ERR, VHOST_PORT, "Failed to unconfigure DMA in vhost.\n");
> +	}
> +}
> +
>  /*
>   * Remove a device from the specific data core linked list and from the
>   * main linked list. Synchronization  occurs through the use of the
> @@ -1614,17 +1636,8 @@ destroy_device(int vid)
>  		"(%d) device has been removed from data core\n",
>  		vdev->vid);
> 
> -	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> -		vhost_clear_queue(vdev, VIRTIO_RXQ);
> -		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> -		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> -	}
> -
> -	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
> -		vhost_clear_queue(vdev, VIRTIO_TXQ);
> -		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
> -		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
> -	}
> +	vhost_clear_async(vdev, vid, VIRTIO_RXQ);
> +	vhost_clear_async(vdev, vid, VIRTIO_TXQ);
> 
>  	rte_free(vdev);
>  }
> @@ -1673,14 +1686,19 @@ vhost_async_channel_register(int vid)
> 
>  	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].dev_id !=
> INVALID_DMA_ID) {
>  		rx_ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
> -		if (rx_ret == 0)
> +		if (rx_ret == 0) {
> 			dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].async_enabled = true;
> +			dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].dma_ref_count++;
> +		}
> +
>  	}
> 
>  	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].dev_id !=
> INVALID_DMA_ID) {
>  		tx_ret = rte_vhost_async_channel_register(vid, VIRTIO_TXQ);
> -		if (tx_ret == 0)
> +		if (tx_ret == 0) {
> 			dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].async_enabled = true;
> +			dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].dma_ref_count++;
> +		}
>  	}
> 
>  	return rx_ret | tx_ret;
> @@ -1886,6 +1904,7 @@ reset_dma(void)
>  		for (j = 0; j < RTE_MAX_QUEUES_PER_PORT * 2; j++) {
>  			dma_bind[i].dmas[j].dev_id = INVALID_DMA_ID;
>  			dma_bind[i].dmas[j].async_enabled = false;
> +			dma_bind[i].dmas[j].dma_ref_count = 0;
>  		}
>  	}
> 
> diff --git a/examples/vhost/main.h b/examples/vhost/main.h
> index 2fcb8376c5..2b2cf828d3 100644
> --- a/examples/vhost/main.h
> +++ b/examples/vhost/main.h
> @@ -96,6 +96,7 @@ struct dma_info {
>  	struct rte_pci_addr addr;
>  	int16_t dev_id;
>  	bool async_enabled;
> +	uint16_t dma_ref_count;
>  };
> 
>  struct dma_for_vhost {
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v3 2/2] examples/vhost: unconfigure DMA vchannel
  2022-09-29  8:27     ` Xia, Chenbo
@ 2022-10-08  0:38       ` Ding, Xuan
  0 siblings, 0 replies; 43+ messages in thread
From: Ding, Xuan @ 2022-10-08  0:38 UTC (permalink / raw)
  To: Xia, Chenbo, maxime.coquelin
  Cc: dev, Hu, Jiayu, He, Xingguang, Yang, YvonneX, Jiang, Cheng1,
	Wang, YuanX, Ma, WenwuX

Hi Chenbo,

> -----Original Message-----
> From: Xia, Chenbo <chenbo.xia@intel.com>
> Sent: Thursday, September 29, 2022 4:28 PM
> To: Ding, Xuan <xuan.ding@intel.com>; maxime.coquelin@redhat.com
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>;
> Jiang, Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX
> <yuanx.wang@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>
> Subject: RE: [PATCH v3 2/2] examples/vhost: unconfigure DMA vchannel
> 
> > -----Original Message-----
> > From: Ding, Xuan <xuan.ding@intel.com>
> > Sent: Thursday, September 29, 2022 9:33 AM
> > To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> > Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> > <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>;
> > Jiang,
> > Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX
> <yuanx.wang@intel.com>;
> > Ma, WenwuX <wenwux.ma@intel.com>; Ding, Xuan
> <xuan.ding@intel.com>
> > Subject: [PATCH v3 2/2] examples/vhost: unconfigure DMA vchannel
> >
> > From: Xuan Ding <xuan.ding@intel.com>
> >
> > This patch applies rte_vhost_async_dma_unconfigure() API to manually
> > free DMA vchannels instead of waiting until the program ends to be
> > released.
> >
> > Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> > ---
> >  examples/vhost/main.c | 45
> > ++++++++++++++++++++++++++++++-------------
> >  examples/vhost/main.h |  1 +
> >  2 files changed, 33 insertions(+), 13 deletions(-)
> >
> > diff --git a/examples/vhost/main.c b/examples/vhost/main.c index
> > 0fa4753e70..32f396d88a 100644
> > --- a/examples/vhost/main.c
> > +++ b/examples/vhost/main.c
> > @@ -1558,6 +1558,28 @@ vhost_clear_queue(struct vhost_dev *vdev,
> > uint16_t
> > queue_id)
> >  	}
> >  }
> >
> > +static void
> > +vhost_clear_async(struct vhost_dev *vdev, int vid, uint16_t queue_id)
> > +{
> > +	int16_t dma_id;
> > +	uint16_t ref_count;
> > +
> > +	if (dma_bind[vid].dmas[queue_id].async_enabled) {
> > +		vhost_clear_queue(vdev, queue_id);
> > +		rte_vhost_async_channel_unregister(vid, queue_id);
> > +		dma_bind[vid].dmas[queue_id].async_enabled = false;
> > +	}
> > +
> > +	dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
> > +	dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dma_ref_count--;
> > +	ref_count = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dma_ref_count;
> 
> Above two lines could be combined like 'ref_count = --XXXX'
> 
> But I wonder is this dma_ref_count really needed? It's always 0/1 like
> Async_enabled, no? If it target at multi-port sharing same one dma case, it
> should not be implemented like this.

Yes, the dma_ref_count here is for the case of multiple ports sharing the
same DMA device. The intention was to unconfigure the DMA device once it is
no longer used by any port. I will check the implementation.

Thanks,
Xuan

> 
> Thanks,
> Chenbo
> 
> > +
> > +	if (ref_count == 0 && dma_id != INVALID_DMA_ID) {
> > +		if (rte_vhost_async_dma_unconfigure(dma_id, 0) < 0)
> > +			RTE_LOG(ERR, VHOST_PORT, "Failed to unconfigure DMA in vhost.\n");
> > +	}
> > +}
> > +
> >  /*
> >   * Remove a device from the specific data core linked list and from the
> >   * main linked list. Synchronization  occurs through the use of the
> > @@ -1614,17 +1636,8 @@ destroy_device(int vid)
> >  		"(%d) device has been removed from data core\n",
> >  		vdev->vid);
> >
> > -	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> > -		vhost_clear_queue(vdev, VIRTIO_RXQ);
> > -		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> > -		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> > -	}
> > -
> > -	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
> > -		vhost_clear_queue(vdev, VIRTIO_TXQ);
> > -		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
> > -		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
> > -	}
> > +	vhost_clear_async(vdev, vid, VIRTIO_RXQ);
> > +	vhost_clear_async(vdev, vid, VIRTIO_TXQ);
> >
> >  	rte_free(vdev);
> >  }
> > @@ -1673,14 +1686,19 @@ vhost_async_channel_register(int vid)
> >
> >  	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].dev_id !=
> > INVALID_DMA_ID) {
> >  		rx_ret = rte_vhost_async_channel_register(vid,
> VIRTIO_RXQ);
> > -		if (rx_ret == 0)
> > +		if (rx_ret == 0) {
> > 			dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].async_enabled = true;
> > +			dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].dma_ref_count++;
> > +		}
> > +
> >  	}
> >
> >  	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].dev_id !=
> > INVALID_DMA_ID) {
> >  		tx_ret = rte_vhost_async_channel_register(vid,
> VIRTIO_TXQ);
> > -		if (tx_ret == 0)
> > +		if (tx_ret == 0) {
> >
> > 			dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].async_enabled = true;
> > +			dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].dma_ref_count++;
> >  	}
> >
> >  	return rx_ret | tx_ret;
> > @@ -1886,6 +1904,7 @@ reset_dma(void)
> >  		for (j = 0; j < RTE_MAX_QUEUES_PER_PORT * 2; j++) {
> >  			dma_bind[i].dmas[j].dev_id = INVALID_DMA_ID;
> >  			dma_bind[i].dmas[j].async_enabled = false;
> > +			dma_bind[i].dmas[j].dma_ref_count = 0;
> >  		}
> >  	}
> >
> > diff --git a/examples/vhost/main.h b/examples/vhost/main.h index
> > 2fcb8376c5..2b2cf828d3 100644
> > --- a/examples/vhost/main.h
> > +++ b/examples/vhost/main.h
> > @@ -96,6 +96,7 @@ struct dma_info {
> >  	struct rte_pci_addr addr;
> >  	int16_t dev_id;
> >  	bool async_enabled;
> > +	uint16_t dma_ref_count;
> >  };
> >
> >  struct dma_for_vhost {
> > --
> > 2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v4 0/2] vhost: introduce DMA vchannel unconfiguration
  2022-08-14 14:04 [PATCH v1 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
                   ` (3 preceding siblings ...)
  2022-09-29  1:32 ` [PATCH v3 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-10-13  6:40 ` xuan.ding
  2022-10-13  6:40   ` [PATCH v4 1/2] " xuan.ding
  2022-10-13  6:40   ` [PATCH v4 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  2022-10-13  9:27 ` [PATCH v5 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
                   ` (3 subsequent siblings)
  8 siblings, 2 replies; 43+ messages in thread
From: xuan.ding @ 2022-10-13  6:40 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patchset introduces a new API rte_vhost_async_dma_unconfigure()
to help users manually free DMA vchannels that are no longer in use.

Note: this API should be called after the async channel has been unregistered.

v4:
* Rebase to 22.11 rc1.
* Fix the usage of 'dma_ref_count' to make sure the specified DMA device
  is not used by any vhost ports before unconfiguration.

v3:
* Rebase to latest DPDK.
* Refine some descriptions in the doc.
* Fix one bug in the vhost example.

v2:
* Add spinlock protection.
* Fix a memory leak issue.
* Refine the doc.

Xuan Ding (2):
  vhost: introduce DMA vchannel unconfiguration
  examples/vhost: unconfigure DMA vchannel

 doc/guides/prog_guide/vhost_lib.rst    |  6 +++
 doc/guides/rel_notes/release_22_11.rst |  4 ++
 examples/vhost/main.c                  | 38 +++++++++-----
 lib/vhost/rte_vhost_async.h            | 18 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
 6 files changed, 120 insertions(+), 18 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v4 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-13  6:40 ` [PATCH v4 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-10-13  6:40   ` xuan.ding
  2022-10-13  8:01     ` Maxime Coquelin
  2022-10-13  6:40   ` [PATCH v4 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  1 sibling, 1 reply; 43+ messages in thread
From: xuan.ding @ 2022-10-13  6:40 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
vchannels in the vhost async data path. Lock protection is also added
to protect DMA vchannel configuration and unconfiguration from
concurrent calls.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  6 +++
 doc/guides/rel_notes/release_22_11.rst |  4 ++
 lib/vhost/rte_vhost_async.h            | 18 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
 5 files changed, 95 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index bad4d819e1..d3cef978d0 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -323,6 +323,12 @@ The following is an overview of some key Vhost API functions:
   Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
   VDPA_DEVICE_TYPE_BLK.
 
+* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
+
+  Clean up DMA vChannels that are no longer in use. This function
+  needs to be called after the deregistration of the async DMA
+  vchannel has finished.
+
 Vhost-user Implementations
 --------------------------
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 2da8bc9661..3be150122c 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -236,6 +236,10 @@ New Features
 
      strings $dpdk_binary_or_driver | sed -n 's/^PMD_INFO_STRING= //p'
 
+* **Added DMA vChannel unconfiguration for async vhost.**
+
+  * Added support to unconfigure DMA vChannels that have been unregistered.
+
 
 Removed Items
 -------------
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 1db2a10124..6ee4f7258d 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -266,6 +266,24 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
 	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Unconfigure DMA vChannels in asynchronous data path.
+ * This function should be called after the DMA vChannel has been unregistered.
+ *
+ * @param dma_id
+ *  the identifier of DMA device
+ * @param vchan_id
+ *  the identifier of virtual DMA channel
+ * @return
+ *  0 on success, and -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 7a00b65740..0b61870870 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -94,6 +94,9 @@ EXPERIMENTAL {
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
 	rte_vhost_clear_queue;
+
+	# added in 22.11
+	rte_vhost_async_dma_unconfigure;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 8740aa2788..9fbc56229a 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -23,6 +23,7 @@
 
 struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
 pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
+static rte_spinlock_t vhost_dma_lock = RTE_SPINLOCK_INITIALIZER;
 
 struct vhost_vq_stats_name_off {
 	char name[RTE_VHOST_STATS_NAME_SIZE];
@@ -1844,19 +1845,20 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	void *pkts_cmpl_flag_addr;
 	uint16_t max_desc;
 
+	rte_spinlock_lock(&vhost_dma_lock);
 	if (!rte_dma_is_valid(dma_id)) {
 		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (rte_dma_info_get(dma_id, &info) != 0) {
 		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (vchan_id >= info.max_vchans) {
 		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
-		return -1;
+		goto error;
 	}
 
 	if (!dma_copy_track[dma_id].vchans) {
@@ -1868,7 +1870,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			VHOST_LOG_CONFIG("dma", ERR,
 				"Failed to allocate vchans for DMA %d vChannel %u.\n",
 				dma_id, vchan_id);
-			return -1;
+			goto error;
 		}
 
 		dma_copy_track[dma_id].vchans = vchans;
@@ -1877,6 +1879,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
 		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n",
 			dma_id, vchan_id);
+		rte_spinlock_unlock(&vhost_dma_lock);
 		return 0;
 	}
 
@@ -1894,7 +1897,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			rte_free(dma_copy_track[dma_id].vchans);
 			dma_copy_track[dma_id].vchans = NULL;
 		}
-		return -1;
+		goto error;
 	}
 
 	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = pkts_cmpl_flag_addr;
@@ -1902,7 +1905,12 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
 	dma_copy_track[dma_id].nr_vchans++;
 
+	rte_spinlock_unlock(&vhost_dma_lock);
 	return 0;
+
+error:
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return -1;
 }
 
 int
@@ -2091,5 +2099,56 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 	return 0;
 }
 
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
+{
+	struct rte_dma_info info;
+	uint16_t max_desc;
+	int i;
+
+	rte_spinlock_lock(&vhost_dma_lock);
+	if (!rte_dma_is_valid(dma_id)) {
+		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
+		goto error;
+	}
+
+	if (rte_dma_info_get(dma_id, &info) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
+		goto error;
+	}
+
+	if (vchan_id >= info.max_vchans) {
+		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	max_desc = info.max_desc;
+	for (i = 0; i < max_desc; i++) {
+		if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] != NULL) {
+			rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i]);
+			dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] = NULL;
+		}
+	}
+
+	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr);
+		dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = NULL;
+	}
+
+	if (dma_copy_track[dma_id].vchans != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans);
+		dma_copy_track[dma_id].vchans = NULL;
+	}
+
+	dma_copy_track[dma_id].nr_vchans--;
+
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return 0;
+
+error:
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return -1;
+}
+
 RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
 RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v4 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-13  6:40 ` [PATCH v4 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  2022-10-13  6:40   ` [PATCH v4 1/2] " xuan.ding
@ 2022-10-13  6:40   ` xuan.ding
  2022-10-13  8:07     ` Maxime Coquelin
  1 sibling, 1 reply; 43+ messages in thread
From: xuan.ding @ 2022-10-13  6:40 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patch applies rte_vhost_async_dma_unconfigure() to manually
free DMA vchannels. Before unconfiguration, make sure the specified
DMA device is no longer in use by any vhost port.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 examples/vhost/main.c | 38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index ac78704d79..bfeb808dcc 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -73,6 +73,7 @@ static int total_num_mbufs = NUM_MBUFS_DEFAULT;
 
 struct dma_for_vhost dma_bind[RTE_MAX_VHOST_DEVICE];
 int16_t dmas_id[RTE_DMADEV_DEFAULT_MAX];
+int16_t dma_ref_count[RTE_DMADEV_DEFAULT_MAX];
 static int dma_count;
 
 /* mask of enabled ports */
@@ -371,6 +372,7 @@ open_dma(const char *value)
 done:
 		(dma_info + socketid)->dmas[vring_id].dev_id = dev_id;
 		(dma_info + socketid)->async_flag |= async_flag;
+		dma_ref_count[dev_id]++;
 		i++;
 	}
 out:
@@ -1562,6 +1564,27 @@ vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id)
 	}
 }
 
+static void
+vhost_clear_async(struct vhost_dev *vdev, int vid, uint16_t queue_id)
+{
+	int16_t dma_id;
+
+	if (dma_bind[vid].dmas[queue_id].async_enabled) {
+		vhost_clear_queue(vdev, queue_id);
+		rte_vhost_async_channel_unregister(vid, queue_id);
+		dma_bind[vid].dmas[queue_id].async_enabled = false;
+
+		dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
+		dma_ref_count[dma_id]--;
+
+		if (dma_ref_count[dma_id] == 0) {
+			if (rte_vhost_async_dma_unconfigure(dma_id, 0) < 0)
+				RTE_LOG(ERR, VHOST_CONFIG,
+				       "Failed to unconfigure DMA %d in vhost.\n", dma_id);
+		}
+	}
+}
+
 /*
  * Remove a device from the specific data core linked list and from the
  * main linked list. Synchronization  occurs through the use of the
@@ -1618,17 +1641,8 @@ destroy_device(int vid)
 		"(%d) device has been removed from data core\n",
 		vdev->vid);
 
-	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
-		vhost_clear_queue(vdev, VIRTIO_RXQ);
-		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
-		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
-	}
-
-	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
-		vhost_clear_queue(vdev, VIRTIO_TXQ);
-		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
-		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
-	}
+	vhost_clear_async(vdev, vid, VIRTIO_RXQ);
+	vhost_clear_async(vdev, vid, VIRTIO_TXQ);
 
 	rte_free(vdev);
 }
@@ -1690,8 +1704,6 @@ vhost_async_channel_register(int vid)
 	return rx_ret | tx_ret;
 }
 
-
-
 /*
  * A new device is added to a data core. First the device is added to the main linked list
  * and then allocated to a specific data core.
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-13  6:40   ` [PATCH v4 1/2] " xuan.ding
@ 2022-10-13  8:01     ` Maxime Coquelin
  2022-10-13  8:45       ` Ding, Xuan
  0 siblings, 1 reply; 43+ messages in thread
From: Maxime Coquelin @ 2022-10-13  8:01 UTC (permalink / raw)
  To: xuan.ding, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma



On 10/13/22 08:40, xuan.ding@intel.com wrote:
> From: Xuan Ding <xuan.ding@intel.com>
> 
> Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
> vchannels in vhost async data path. Lock protection are also added
> to protect DMA vchannels configuration and unconfiguration
> from concurrent calls.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>   doc/guides/prog_guide/vhost_lib.rst    |  6 +++
>   doc/guides/rel_notes/release_22_11.rst |  4 ++
>   lib/vhost/rte_vhost_async.h            | 18 +++++++
>   lib/vhost/version.map                  |  3 ++
>   lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
>   5 files changed, 95 insertions(+), 5 deletions(-)
> 
> diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
> index bad4d819e1..d3cef978d0 100644
> --- a/doc/guides/prog_guide/vhost_lib.rst
> +++ b/doc/guides/prog_guide/vhost_lib.rst
> @@ -323,6 +323,12 @@ The following is an overview of some key Vhost API functions:
>     Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
>     VDPA_DEVICE_TYPE_BLK.
>   
> +* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
> +
> +  Clean DMA vChannels finished to use. This function needs to
> +  be called after the deregistration of async DMA vchannel
> +  has been finished.
> +
>   Vhost-user Implementations
>   --------------------------
>   
> diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
> index 2da8bc9661..3be150122c 100644
> --- a/doc/guides/rel_notes/release_22_11.rst
> +++ b/doc/guides/rel_notes/release_22_11.rst
> @@ -236,6 +236,10 @@ New Features
>   
>        strings $dpdk_binary_or_driver | sed -n 's/^PMD_INFO_STRING= //p'
>   
> +* **Added DMA vChannel unconfiguration for async vhost.**
> +
> +  * Added support to unconfigure DMA vChannels that have been unregistered.
> +
>   
>   Removed Items
>   -------------
> diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> index 1db2a10124..6ee4f7258d 100644
> --- a/lib/vhost/rte_vhost_async.h
> +++ b/lib/vhost/rte_vhost_async.h
> @@ -266,6 +266,24 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
>   	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
>   	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
>   
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
> + *
> + * Unconfigure DMA vChannels in asynchronous data path.
> + * This function should be called after the DMA vChannel has been unregistered.
> + *
> + * @param dma_id
> + *  the identifier of DMA device
> + * @param vchan_id
> + *  the identifier of virtual DMA channel
> + * @return
> + *  0 on success, and -1 on failure
> + */
> +__rte_experimental
> +int
> +rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
> +
>   #ifdef __cplusplus
>   }
>   #endif
> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> index 7a00b65740..0b61870870 100644
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> @@ -94,6 +94,9 @@ EXPERIMENTAL {
>   	rte_vhost_async_try_dequeue_burst;
>   	rte_vhost_driver_get_vdpa_dev_type;
>   	rte_vhost_clear_queue;
> +
> +	# added in 22.11
> +	rte_vhost_async_dma_unconfigure;
>   };
>   
>   INTERNAL {
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 8740aa2788..9fbc56229a 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -23,6 +23,7 @@
>   
>   struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
>   pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
> +static rte_spinlock_t vhost_dma_lock = RTE_SPINLOCK_INITIALIZER;

I think a mutex would be more appropriate.

Thanks,
Maxime


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-13  6:40   ` [PATCH v4 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
@ 2022-10-13  8:07     ` Maxime Coquelin
  2022-10-13  8:49       ` Ding, Xuan
  0 siblings, 1 reply; 43+ messages in thread
From: Maxime Coquelin @ 2022-10-13  8:07 UTC (permalink / raw)
  To: xuan.ding, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma



On 10/13/22 08:40, xuan.ding@intel.com wrote:
> From: Xuan Ding <xuan.ding@intel.com>
> 
> This patch applies rte_vhost_async_dma_unconfigure() to manually
> free DMA vchannels. Before unconfiguration, need make sure the
> specified DMA device is no longer used by any vhost ports.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>   examples/vhost/main.c | 38 +++++++++++++++++++++++++-------------
>   1 file changed, 25 insertions(+), 13 deletions(-)
> 
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index ac78704d79..bfeb808dcc 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -73,6 +73,7 @@ static int total_num_mbufs = NUM_MBUFS_DEFAULT;
>   
>   struct dma_for_vhost dma_bind[RTE_MAX_VHOST_DEVICE];
>   int16_t dmas_id[RTE_DMADEV_DEFAULT_MAX];
> +int16_t dma_ref_count[RTE_DMADEV_DEFAULT_MAX];
>   static int dma_count;
>   
>   /* mask of enabled ports */
> @@ -371,6 +372,7 @@ open_dma(const char *value)
>   done:
>   		(dma_info + socketid)->dmas[vring_id].dev_id = dev_id;
>   		(dma_info + socketid)->async_flag |= async_flag;
> +		dma_ref_count[dev_id]++;
>   		i++;
>   	}
>   out:
> @@ -1562,6 +1564,27 @@ vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id)
>   	}
>   }
>   
> +static void
> +vhost_clear_async(struct vhost_dev *vdev, int vid, uint16_t queue_id)
> +{
> +	int16_t dma_id;
> +
> +	if (dma_bind[vid].dmas[queue_id].async_enabled) {

if (!dma_bind[vid].dmas[queue_id].async_enabled)
	return;

> +		vhost_clear_queue(vdev, queue_id);
> +		rte_vhost_async_channel_unregister(vid, queue_id);
> +		dma_bind[vid].dmas[queue_id].async_enabled = false;
> +
> +		dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
> +		dma_ref_count[dma_id]--;
> +
> +		if (dma_ref_count[dma_id] == 0) {

if (dma_ref_count[dma_id] > 0)
	return;

Doing this should improve readability.

> +			if (rte_vhost_async_dma_unconfigure(dma_id, 0) < 0)
> +				RTE_LOG(ERR, VHOST_CONFIG,
> +				       "Failed to unconfigure DMA %d in vhost.\n", dma_id);
> +		}
> +	}
> +}
> +
>   /*
>    * Remove a device from the specific data core linked list and from the
>    * main linked list. Synchronization  occurs through the use of the
> @@ -1618,17 +1641,8 @@ destroy_device(int vid)
>   		"(%d) device has been removed from data core\n",
>   		vdev->vid);
>   
> -	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> -		vhost_clear_queue(vdev, VIRTIO_RXQ);
> -		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> -		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> -	}
> -
> -	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
> -		vhost_clear_queue(vdev, VIRTIO_TXQ);
> -		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
> -		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
> -	}
> +	vhost_clear_async(vdev, vid, VIRTIO_RXQ);
> +	vhost_clear_async(vdev, vid, VIRTIO_TXQ);
>   
>   	rte_free(vdev);
>   }
> @@ -1690,8 +1704,6 @@ vhost_async_channel_register(int vid)
>   	return rx_ret | tx_ret;
>   }
>   
> -
> -
>   /*
>    * A new device is added to a data core. First the device is added to the main linked list
>    * and then allocated to a specific data core.


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v4 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-13  8:01     ` Maxime Coquelin
@ 2022-10-13  8:45       ` Ding, Xuan
  0 siblings, 0 replies; 43+ messages in thread
From: Ding, Xuan @ 2022-10-13  8:45 UTC (permalink / raw)
  To: Maxime Coquelin, Xia, Chenbo
  Cc: dev, Hu, Jiayu, He, Xingguang, Yang, YvonneX, Jiang, Cheng1,
	Wang, YuanX, Ma, WenwuX

Hi Maxime,

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Thursday, October 13, 2022 4:02 PM
> To: Ding, Xuan <xuan.ding@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>;
> Jiang, Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX
> <yuanx.wang@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>
> Subject: Re: [PATCH v4 1/2] vhost: introduce DMA vchannel unconfiguration
> 
> 
> 
> On 10/13/22 08:40, xuan.ding@intel.com wrote:
> > From: Xuan Ding <xuan.ding@intel.com>
> >
> > Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
> > vchannels in vhost async data path. Lock protection are also added to
> > protect DMA vchannels configuration and unconfiguration from
> > concurrent calls.
> >
> > Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> > ---
> >   doc/guides/prog_guide/vhost_lib.rst    |  6 +++
> >   doc/guides/rel_notes/release_22_11.rst |  4 ++
> >   lib/vhost/rte_vhost_async.h            | 18 +++++++
> >   lib/vhost/version.map                  |  3 ++
> >   lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
> >   5 files changed, 95 insertions(+), 5 deletions(-)
> >
> > diff --git a/doc/guides/prog_guide/vhost_lib.rst
> > b/doc/guides/prog_guide/vhost_lib.rst
> > index bad4d819e1..d3cef978d0 100644
> > --- a/doc/guides/prog_guide/vhost_lib.rst
> > +++ b/doc/guides/prog_guide/vhost_lib.rst
> > @@ -323,6 +323,12 @@ The following is an overview of some key Vhost
> API functions:
> >     Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
> >     VDPA_DEVICE_TYPE_BLK.
> >
> > +* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
> > +
> > +  Clean DMA vChannels finished to use. This function needs to  be
> > + called after the deregistration of async DMA vchannel  has been
> > + finished.
> > +
> >   Vhost-user Implementations
> >   --------------------------
> >
> > diff --git a/doc/guides/rel_notes/release_22_11.rst
> > b/doc/guides/rel_notes/release_22_11.rst
> > index 2da8bc9661..3be150122c 100644
> > --- a/doc/guides/rel_notes/release_22_11.rst
> > +++ b/doc/guides/rel_notes/release_22_11.rst
> > @@ -236,6 +236,10 @@ New Features
> >
> >        strings $dpdk_binary_or_driver | sed -n 's/^PMD_INFO_STRING= //p'
> >
> > +* **Added DMA vChannel unconfiguration for async vhost.**
> > +
> > +  * Added support to unconfigure DMA vChannels that have been
> unregistered.
> > +
> >
> >   Removed Items
> >   -------------
> > diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> > index 1db2a10124..6ee4f7258d 100644
> > --- a/lib/vhost/rte_vhost_async.h
> > +++ b/lib/vhost/rte_vhost_async.h
> > @@ -266,6 +266,24 @@ rte_vhost_async_try_dequeue_burst(int vid,
> uint16_t queue_id,
> >   	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t
> count,
> >   	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change, or be removed, without prior
> notice.
> > + *
> > + * Unconfigure DMA vChannels in asynchronous data path.
> > + * This function should be called after the DMA vChannel has been
> unregistered.
> > + *
> > + * @param dma_id
> > + *  the identifier of DMA device
> > + * @param vchan_id
> > + *  the identifier of virtual DMA channel
> > + * @return
> > + *  0 on success, and -1 on failure
> > + */
> > +__rte_experimental
> > +int
> > +rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
> > +
> >   #ifdef __cplusplus
> >   }
> >   #endif
> > diff --git a/lib/vhost/version.map b/lib/vhost/version.map index
> > 7a00b65740..0b61870870 100644
> > --- a/lib/vhost/version.map
> > +++ b/lib/vhost/version.map
> > @@ -94,6 +94,9 @@ EXPERIMENTAL {
> >   	rte_vhost_async_try_dequeue_burst;
> >   	rte_vhost_driver_get_vdpa_dev_type;
> >   	rte_vhost_clear_queue;
> > +
> > +	# added in 22.11
> > +	rte_vhost_async_dma_unconfigure;
> >   };
> >
> >   INTERNAL {
> > diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index
> > 8740aa2788..9fbc56229a 100644
> > --- a/lib/vhost/vhost.c
> > +++ b/lib/vhost/vhost.c
> > @@ -23,6 +23,7 @@
> >
> >   struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
> >   pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
> > +static rte_spinlock_t vhost_dma_lock = RTE_SPINLOCK_INITIALIZER;
> 
> I think a mutex would be more appropriate.

Thanks for your suggestion.
Maybe a mutex is more efficient; I will switch to a mutex in the next version.

Regards,
Xuan

> 
> Thanks,
> Maxime


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v4 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-13  8:07     ` Maxime Coquelin
@ 2022-10-13  8:49       ` Ding, Xuan
  0 siblings, 0 replies; 43+ messages in thread
From: Ding, Xuan @ 2022-10-13  8:49 UTC (permalink / raw)
  To: Maxime Coquelin, Xia, Chenbo
  Cc: dev, Hu, Jiayu, He, Xingguang, Yang, YvonneX, Jiang, Cheng1,
	Wang, YuanX, Ma, WenwuX



> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Thursday, October 13, 2022 4:07 PM
> To: Ding, Xuan <xuan.ding@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>;
> Jiang, Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX
> <yuanx.wang@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>
> Subject: Re: [PATCH v4 2/2] examples/vhost: unconfigure DMA vchannel
> 
> 
> 
> On 10/13/22 08:40, xuan.ding@intel.com wrote:
> > From: Xuan Ding <xuan.ding@intel.com>
> >
> > This patch applies rte_vhost_async_dma_unconfigure() to manually free
> > DMA vchannels. Before unconfiguration, need make sure the specified
> > DMA device is no longer used by any vhost ports.
> >
> > Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> > ---
> >   examples/vhost/main.c | 38 +++++++++++++++++++++++++-------------
> >   1 file changed, 25 insertions(+), 13 deletions(-)
> >
> > diff --git a/examples/vhost/main.c b/examples/vhost/main.c index
> > ac78704d79..bfeb808dcc 100644
> > --- a/examples/vhost/main.c
> > +++ b/examples/vhost/main.c
> > @@ -73,6 +73,7 @@ static int total_num_mbufs = NUM_MBUFS_DEFAULT;
> >
> >   struct dma_for_vhost dma_bind[RTE_MAX_VHOST_DEVICE];
> >   int16_t dmas_id[RTE_DMADEV_DEFAULT_MAX];
> > +int16_t dma_ref_count[RTE_DMADEV_DEFAULT_MAX];
> >   static int dma_count;
> >
> >   /* mask of enabled ports */
> > @@ -371,6 +372,7 @@ open_dma(const char *value)
> >   done:
> >   		(dma_info + socketid)->dmas[vring_id].dev_id = dev_id;
> >   		(dma_info + socketid)->async_flag |= async_flag;
> > +		dma_ref_count[dev_id]++;
> >   		i++;
> >   	}
> >   out:
> > @@ -1562,6 +1564,27 @@ vhost_clear_queue(struct vhost_dev *vdev,
> uint16_t queue_id)
> >   	}
> >   }
> >
> > +static void
> > +vhost_clear_async(struct vhost_dev *vdev, int vid, uint16_t queue_id)
> > +{
> > +	int16_t dma_id;
> > +
> > +	if (dma_bind[vid].dmas[queue_id].async_enabled) {
> 
> if (!dma_bind[vid].dmas[queue_id].async_enabled)
> 	return;
> 
> > +		vhost_clear_queue(vdev, queue_id);
> > +		rte_vhost_async_channel_unregister(vid, queue_id);
> > +		dma_bind[vid].dmas[queue_id].async_enabled = false;
> > +
> > +		dma_id = dma_bind[vid2socketid[vdev-
> >vid]].dmas[queue_id].dev_id;
> > +		dma_ref_count[dma_id]--;
> > +
> > +		if (dma_ref_count[dma_id] == 0) {
> 
> if (dma_ref_count[dma_id] > 0)
> 	return;
> 
> Doing this should improve readability.

Good suggestion! Please see v5.

Thanks,
Xuan

> 
> > +			if (rte_vhost_async_dma_unconfigure(dma_id, 0) < 0)
> > +				RTE_LOG(ERR, VHOST_CONFIG,
> > +				       "Failed to unconfigure DMA %d in
> vhost.\n", dma_id);
> > +		}
> > +	}
> > +}
> > +
> >   /*
> >    * Remove a device from the specific data core linked list and from the
> >    * main linked list. Synchronization  occurs through the use of the
> > @@ -1618,17 +1641,8 @@ destroy_device(int vid)
> >   		"(%d) device has been removed from data core\n",
> >   		vdev->vid);
> >
> > -	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> > -		vhost_clear_queue(vdev, VIRTIO_RXQ);
> > -		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> > -		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> > -	}
> > -
> > -	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
> > -		vhost_clear_queue(vdev, VIRTIO_TXQ);
> > -		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
> > -		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
> > -	}
> > +	vhost_clear_async(vdev, vid, VIRTIO_RXQ);
> > +	vhost_clear_async(vdev, vid, VIRTIO_TXQ);
> >
> >   	rte_free(vdev);
> >   }
> > @@ -1690,8 +1704,6 @@ vhost_async_channel_register(int vid)
> >   	return rx_ret | tx_ret;
> >   }
> >
> > -
> > -
> >   /*
> >    * A new device is added to a data core. First the device is added to the
> main linked list
> >    * and then allocated to a specific data core.


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v5 0/2] vhost: introduce DMA vchannel unconfiguration
  2022-08-14 14:04 [PATCH v1 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
                   ` (4 preceding siblings ...)
  2022-10-13  6:40 ` [PATCH v4 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-10-13  9:27 ` xuan.ding
  2022-10-13  9:27   ` [PATCH v5 1/2] " xuan.ding
  2022-10-13  9:27   ` [PATCH v5 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  2022-10-18 15:22 ` [PATCH v6 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
                   ` (2 subsequent siblings)
  8 siblings, 2 replies; 43+ messages in thread
From: xuan.ding @ 2022-10-13  9:27 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patchset introduces a new API rte_vhost_async_dma_unconfigure()
to help users manually free DMA vchannels that are no longer in use.

Note: this API should be called after the async channel has been unregistered.
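The ordering constraint in the note above can be modelled as a tiny state
machine. This is only a sketch of the caller's obligation: the function names
are hypothetical stand-ins for the real DPDK APIs, and the library itself is
not claimed to enforce these checks.

```c
/* Toy state machine encoding the documented calling order:
 * configure -> register -> unregister -> unconfigure. */
enum vchan_state { UNCONFIGURED, CONFIGURED, REGISTERED };

static enum vchan_state st = UNCONFIGURED;

static int toy_dma_configure(void)
{
	if (st != UNCONFIGURED)
		return -1;
	st = CONFIGURED;
	return 0;
}

static int toy_async_register(void)
{
	if (st != CONFIGURED)
		return -1;
	st = REGISTERED;
	return 0;
}

static int toy_async_unregister(void)
{
	if (st != REGISTERED)
		return -1;
	st = CONFIGURED;
	return 0;
}

static int toy_dma_unconfigure(void)
{
	/* Must come after the async channel has been unregistered. */
	if (st != CONFIGURED)
		return -1;
	st = UNCONFIGURED;
	return 0;
}
```

Calling the toy unconfigure while the channel is still registered fails; it
succeeds only once the channel has been unregistered.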

v5:
* Use mutex instead of spinlock.
* Improve code readability.

v4:
* Rebase to 22.11 rc1.
* Fix the usage of 'dma_ref_count' to make sure the specified DMA device
  is not used by any vhost ports before unconfiguration.

v3:
* Rebase to latest DPDK.
* Refine some descriptions in the doc.
* Fix one bug in the vhost example.

v2:
* Add spinlock protection.
* Fix a memory leak issue.
* Refine the doc.

Xuan Ding (2):
  vhost: introduce DMA vchannel unconfiguration
  examples/vhost: unconfigure DMA vchannel

 doc/guides/prog_guide/vhost_lib.rst    |  6 +++
 doc/guides/rel_notes/release_22_11.rst |  4 ++
 examples/vhost/main.c                  | 40 ++++++++++-----
 lib/vhost/rte_vhost_async.h            | 18 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
 6 files changed, 122 insertions(+), 18 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v5 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-13  9:27 ` [PATCH v5 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-10-13  9:27   ` xuan.ding
  2022-10-13  9:27   ` [PATCH v5 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  1 sibling, 0 replies; 43+ messages in thread
From: xuan.ding @ 2022-10-13  9:27 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
vchannels in the vhost async data path. Lock protection is also added
to protect DMA vchannel configuration and unconfiguration
from concurrent calls.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  6 +++
 doc/guides/rel_notes/release_22_11.rst |  4 ++
 lib/vhost/rte_vhost_async.h            | 18 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
 5 files changed, 95 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index bad4d819e1..d3cef978d0 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -323,6 +323,12 @@ The following is an overview of some key Vhost API functions:
   Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
   VDPA_DEVICE_TYPE_BLK.
 
+* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
+
+  Clean up DMA vChannels that are no longer in use. This function
+  needs to be called after the deregistration of the async DMA
+  vchannel has finished.
+
 Vhost-user Implementations
 --------------------------
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 2da8bc9661..3be150122c 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -236,6 +236,10 @@ New Features
 
      strings $dpdk_binary_or_driver | sed -n 's/^PMD_INFO_STRING= //p'
 
+* **Added DMA vChannel unconfiguration for async vhost.**
+
+  * Added support to unconfigure DMA vChannels that have been unregistered.
+
 
 Removed Items
 -------------
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 1db2a10124..6ee4f7258d 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -266,6 +266,24 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
 	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Unconfigure DMA vChannels in asynchronous data path.
+ * This function should be called after the DMA vChannel has been unregistered.
+ *
+ * @param dma_id
+ *  the identifier of DMA device
+ * @param vchan_id
+ *  the identifier of virtual DMA channel
+ * @return
+ *  0 on success, and -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 7a00b65740..0b61870870 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -94,6 +94,9 @@ EXPERIMENTAL {
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
 	rte_vhost_clear_queue;
+
+	# added in 22.11
+	rte_vhost_async_dma_unconfigure;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 8740aa2788..975c0d3297 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -23,6 +23,7 @@
 
 struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
 pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
+pthread_mutex_t vhost_dma_lock = PTHREAD_MUTEX_INITIALIZER;
 
 struct vhost_vq_stats_name_off {
 	char name[RTE_VHOST_STATS_NAME_SIZE];
@@ -1844,19 +1845,20 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	void *pkts_cmpl_flag_addr;
 	uint16_t max_desc;
 
+	pthread_mutex_lock(&vhost_dma_lock);
 	if (!rte_dma_is_valid(dma_id)) {
 		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (rte_dma_info_get(dma_id, &info) != 0) {
 		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (vchan_id >= info.max_vchans) {
 		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
-		return -1;
+		goto error;
 	}
 
 	if (!dma_copy_track[dma_id].vchans) {
@@ -1868,7 +1870,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			VHOST_LOG_CONFIG("dma", ERR,
 				"Failed to allocate vchans for DMA %d vChannel %u.\n",
 				dma_id, vchan_id);
-			return -1;
+			goto error;
 		}
 
 		dma_copy_track[dma_id].vchans = vchans;
@@ -1877,6 +1879,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
 		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n",
 			dma_id, vchan_id);
+		pthread_mutex_unlock(&vhost_dma_lock);
 		return 0;
 	}
 
@@ -1894,7 +1897,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			rte_free(dma_copy_track[dma_id].vchans);
 			dma_copy_track[dma_id].vchans = NULL;
 		}
-		return -1;
+		goto error;
 	}
 
 	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = pkts_cmpl_flag_addr;
@@ -1902,7 +1905,12 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
 	dma_copy_track[dma_id].nr_vchans++;
 
+	pthread_mutex_unlock(&vhost_dma_lock);
 	return 0;
+
+error:
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return -1;
 }
 
 int
@@ -2091,5 +2099,56 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 	return 0;
 }
 
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
+{
+	struct rte_dma_info info;
+	uint16_t max_desc;
+	int i;
+
+	pthread_mutex_lock(&vhost_dma_lock);
+	if (!rte_dma_is_valid(dma_id)) {
+		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
+		goto error;
+	}
+
+	if (rte_dma_info_get(dma_id, &info) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
+		goto error;
+	}
+
+	if (vchan_id >= info.max_vchans) {
+		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	max_desc = info.max_desc;
+	for (i = 0; i < max_desc; i++) {
+		if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] != NULL) {
+			rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i]);
+			dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] = NULL;
+		}
+	}
+
+	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr);
+		dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = NULL;
+	}
+
+	if (dma_copy_track[dma_id].vchans != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans);
+		dma_copy_track[dma_id].vchans = NULL;
+	}
+
+	dma_copy_track[dma_id].nr_vchans--;
+
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return 0;
+
+error:
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return -1;
+}
+
 RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
 RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v5 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-13  9:27 ` [PATCH v5 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  2022-10-13  9:27   ` [PATCH v5 1/2] " xuan.ding
@ 2022-10-13  9:27   ` xuan.ding
  1 sibling, 0 replies; 43+ messages in thread
From: xuan.ding @ 2022-10-13  9:27 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patch uses rte_vhost_async_dma_unconfigure() to manually free
DMA vchannels. Before unconfiguring, make sure the specified DMA
device is no longer used by any vhost port.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 examples/vhost/main.c | 40 +++++++++++++++++++++++++++-------------
 1 file changed, 27 insertions(+), 13 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index ac78704d79..baf7d49116 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -73,6 +73,7 @@ static int total_num_mbufs = NUM_MBUFS_DEFAULT;
 
 struct dma_for_vhost dma_bind[RTE_MAX_VHOST_DEVICE];
 int16_t dmas_id[RTE_DMADEV_DEFAULT_MAX];
+int16_t dma_ref_count[RTE_DMADEV_DEFAULT_MAX];
 static int dma_count;
 
 /* mask of enabled ports */
@@ -371,6 +372,7 @@ open_dma(const char *value)
 done:
 		(dma_info + socketid)->dmas[vring_id].dev_id = dev_id;
 		(dma_info + socketid)->async_flag |= async_flag;
+		dma_ref_count[dev_id]++;
 		i++;
 	}
 out:
@@ -1562,6 +1564,29 @@ vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id)
 	}
 }
 
+static void
+vhost_clear_async(struct vhost_dev *vdev, int vid, uint16_t queue_id)
+{
+	int16_t dma_id;
+
+	if (!dma_bind[vid].dmas[queue_id].async_enabled)
+		return;
+
+	vhost_clear_queue(vdev, queue_id);
+	rte_vhost_async_channel_unregister(vid, queue_id);
+	dma_bind[vid].dmas[queue_id].async_enabled = false;
+
+	dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
+	dma_ref_count[dma_id]--;
+
+	if (dma_ref_count[dma_id] > 0)
+		return;
+
+	if (rte_vhost_async_dma_unconfigure(dma_id, 0) < 0)
+		RTE_LOG(ERR, VHOST_CONFIG,
+			"Failed to unconfigure DMA %d in vhost.\n", dma_id);
+}
+
 /*
  * Remove a device from the specific data core linked list and from the
  * main linked list. Synchronization  occurs through the use of the
@@ -1618,17 +1643,8 @@ destroy_device(int vid)
 		"(%d) device has been removed from data core\n",
 		vdev->vid);
 
-	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
-		vhost_clear_queue(vdev, VIRTIO_RXQ);
-		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
-		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
-	}
-
-	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
-		vhost_clear_queue(vdev, VIRTIO_TXQ);
-		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
-		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
-	}
+	vhost_clear_async(vdev, vid, VIRTIO_RXQ);
+	vhost_clear_async(vdev, vid, VIRTIO_TXQ);
 
 	rte_free(vdev);
 }
@@ -1690,8 +1706,6 @@ vhost_async_channel_register(int vid)
 	return rx_ret | tx_ret;
 }
 
-
-
 /*
  * A new device is added to a data core. First the device is added to the main linked list
  * and then allocated to a specific data core.
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v6 0/2] vhost: introduce DMA vchannel unconfiguration
  2022-08-14 14:04 [PATCH v1 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
                   ` (5 preceding siblings ...)
  2022-10-13  9:27 ` [PATCH v5 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-10-18 15:22 ` xuan.ding
  2022-10-18 15:22   ` [PATCH v6 1/2] " xuan.ding
  2022-10-18 15:22   ` [PATCH v6 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  2022-10-20  9:11 ` [PATCH v7 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  2022-10-25  8:25 ` [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  8 siblings, 2 replies; 43+ messages in thread
From: xuan.ding @ 2022-10-18 15:22 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patchset introduces a new API rte_vhost_async_dma_unconfigure()
to help users manually free DMA vchannels that are no longer in use.

v6:
* Move DMA unconfiguration to the end, because DMA devices may be
  reused after destroy_device().
* Refine the doc to state that the DMA device should not be used in
  vhost after unconfiguration.

v5:
* Use mutex instead of spinlock.
* Improve code readability.

v4:
* Rebase to 22.11 rc1.
* Fix the usage of 'dma_ref_count' to make sure the specified DMA device
  is not used by any vhost ports before unconfiguration.

v3:
* Rebase to latest DPDK.
* Refine some descriptions in the doc.
* Fix one bug in the vhost example.

v2:
* Add spinlock protection.
* Fix a memory leak issue.
* Refine the doc.

Xuan Ding (2):
  vhost: introduce DMA vchannel unconfiguration
  examples/vhost: unconfigure DMA vchannel

 doc/guides/prog_guide/vhost_lib.rst    |  7 +++
 doc/guides/rel_notes/release_22_11.rst |  5 ++
 examples/vhost/main.c                  |  8 +++
 lib/vhost/rte_vhost_async.h            | 20 ++++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
 6 files changed, 107 insertions(+), 5 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v6 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-18 15:22 ` [PATCH v6 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-10-18 15:22   ` xuan.ding
  2022-10-19  9:28     ` Xia, Chenbo
  2022-10-18 15:22   ` [PATCH v6 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  1 sibling, 1 reply; 43+ messages in thread
From: xuan.ding @ 2022-10-18 15:22 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
vchannels in the vhost async data path. Lock protection is also added
to protect DMA vchannel configuration and unconfiguration from
concurrent calls.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  7 +++
 doc/guides/rel_notes/release_22_11.rst |  5 ++
 lib/vhost/rte_vhost_async.h            | 20 ++++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
 5 files changed, 99 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index bad4d819e1..fbe841321f 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -323,6 +323,13 @@ The following is an overview of some key Vhost API functions:
   Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
   VDPA_DEVICE_TYPE_BLK.
 
+* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
+
+  Clean DMA vChannels finished to use. This function needs to be called
+  after the deregistration of async DMA vchannel has been finished.
+  After this function is called, the specified DMA device should no
+  longer be used by the Vhost library.
+
 Vhost-user Implementations
 --------------------------
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 2da8bc9661..a1c5cdea7c 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -236,6 +236,11 @@ New Features
 
      strings $dpdk_binary_or_driver | sed -n 's/^PMD_INFO_STRING= //p'
 
+* **Added DMA vChannel unconfiguration for async vhost.**
+
+  * Added support to unconfigure DMA vChannels that have been unregistered
+  and no longer used by Vhost library.
+
 
 Removed Items
 -------------
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 1db2a10124..eac7eb568e 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -266,6 +266,26 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
 	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Unconfigure DMA vChannels in Vhost asynchronous data path.
+ * This function should be called after the DMA vChannel has been unregistered.
+ * After this function is called, the specified DMA device should no longer
+ * be used by the Vhost library.
+ *
+ * @param dma_id
+ *  the identifier of DMA device
+ * @param vchan_id
+ *  the identifier of virtual DMA channel
+ * @return
+ *  0 on success, and -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 7a00b65740..0b61870870 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -94,6 +94,9 @@ EXPERIMENTAL {
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
 	rte_vhost_clear_queue;
+
+	# added in 22.11
+	rte_vhost_async_dma_unconfigure;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 8740aa2788..975c0d3297 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -23,6 +23,7 @@
 
 struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
 pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
+pthread_mutex_t vhost_dma_lock = PTHREAD_MUTEX_INITIALIZER;
 
 struct vhost_vq_stats_name_off {
 	char name[RTE_VHOST_STATS_NAME_SIZE];
@@ -1844,19 +1845,20 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	void *pkts_cmpl_flag_addr;
 	uint16_t max_desc;
 
+	pthread_mutex_lock(&vhost_dma_lock);
 	if (!rte_dma_is_valid(dma_id)) {
 		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (rte_dma_info_get(dma_id, &info) != 0) {
 		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (vchan_id >= info.max_vchans) {
 		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
-		return -1;
+		goto error;
 	}
 
 	if (!dma_copy_track[dma_id].vchans) {
@@ -1868,7 +1870,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			VHOST_LOG_CONFIG("dma", ERR,
 				"Failed to allocate vchans for DMA %d vChannel %u.\n",
 				dma_id, vchan_id);
-			return -1;
+			goto error;
 		}
 
 		dma_copy_track[dma_id].vchans = vchans;
@@ -1877,6 +1879,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
 		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n",
 			dma_id, vchan_id);
+		pthread_mutex_unlock(&vhost_dma_lock);
 		return 0;
 	}
 
@@ -1894,7 +1897,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			rte_free(dma_copy_track[dma_id].vchans);
 			dma_copy_track[dma_id].vchans = NULL;
 		}
-		return -1;
+		goto error;
 	}
 
 	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = pkts_cmpl_flag_addr;
@@ -1902,7 +1905,12 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
 	dma_copy_track[dma_id].nr_vchans++;
 
+	pthread_mutex_unlock(&vhost_dma_lock);
 	return 0;
+
+error:
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return -1;
 }
 
 int
@@ -2091,5 +2099,56 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 	return 0;
 }
 
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
+{
+	struct rte_dma_info info;
+	uint16_t max_desc;
+	int i;
+
+	pthread_mutex_lock(&vhost_dma_lock);
+	if (!rte_dma_is_valid(dma_id)) {
+		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
+		goto error;
+	}
+
+	if (rte_dma_info_get(dma_id, &info) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
+		goto error;
+	}
+
+	if (vchan_id >= info.max_vchans) {
+		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	max_desc = info.max_desc;
+	for (i = 0; i < max_desc; i++) {
+		if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] != NULL) {
+			rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i]);
+			dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] = NULL;
+		}
+	}
+
+	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr);
+		dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = NULL;
+	}
+
+	if (dma_copy_track[dma_id].vchans != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans);
+		dma_copy_track[dma_id].vchans = NULL;
+	}
+
+	dma_copy_track[dma_id].nr_vchans--;
+
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return 0;
+
+error:
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return -1;
+}
+
 RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
 RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v6 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-18 15:22 ` [PATCH v6 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  2022-10-18 15:22   ` [PATCH v6 1/2] " xuan.ding
@ 2022-10-18 15:22   ` xuan.ding
  2022-10-19  2:57     ` Ling, WeiX
  1 sibling, 1 reply; 43+ messages in thread
From: xuan.ding @ 2022-10-18 15:22 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patch uses rte_vhost_async_dma_unconfigure() to manually free
DMA vchannels. Before unconfiguring, make sure the specified DMA
device is no longer used by any vhost port.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 examples/vhost/main.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index ac78704d79..42e53a0f9a 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -2066,6 +2066,14 @@ main(int argc, char *argv[])
 	RTE_LCORE_FOREACH_WORKER(lcore_id)
 		rte_eal_wait_lcore(lcore_id);
 
+	for (i = 0; i < dma_count; i++) {
+		if (rte_vhost_async_dma_unconfigure(dmas_id[i], 0) < 0) {
+			RTE_LOG(ERR, VHOST_PORT,
+				"Failed to unconfigure DMA %d in vhost.\n", dmas_id[i]);
+			rte_exit(EXIT_FAILURE, "Cannot use given DMA device\n");
+		}
+	}
+
 	/* clean up the EAL */
 	rte_eal_cleanup();
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v6 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-18 15:22   ` [PATCH v6 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
@ 2022-10-19  2:57     ` Ling, WeiX
  0 siblings, 0 replies; 43+ messages in thread
From: Ling, WeiX @ 2022-10-19  2:57 UTC (permalink / raw)
  To: Ding, Xuan, maxime.coquelin, Xia, Chenbo
  Cc: dev, Hu, Jiayu, He, Xingguang, Yang, YvonneX, Jiang, Cheng1,
	Wang, YuanX, Ma, WenwuX, Ding, Xuan

> -----Original Message-----
> From: xuan.ding@intel.com <xuan.ding@intel.com>
> Sent: Tuesday, October 18, 2022 11:22 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>; Jiang,
> Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX <yuanx.wang@intel.com>;
> Ma, WenwuX <wenwux.ma@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> Subject: [PATCH v6 2/2] examples/vhost: unconfigure DMA vchannel
> 
> From: Xuan Ding <xuan.ding@intel.com>
> 
> This patch applies rte_vhost_async_dma_unconfigure() to manually free DMA
> vchannels. Before unconfiguration, make sure the specified DMA device is no
> longer used by any vhost ports.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
Tested-by: Wei Ling <weix.ling@intel.com>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v6 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-18 15:22   ` [PATCH v6 1/2] " xuan.ding
@ 2022-10-19  9:28     ` Xia, Chenbo
  0 siblings, 0 replies; 43+ messages in thread
From: Xia, Chenbo @ 2022-10-19  9:28 UTC (permalink / raw)
  To: Ding, Xuan, maxime.coquelin
  Cc: dev, Hu, Jiayu, He, Xingguang, Yang, YvonneX, Jiang, Cheng1,
	Wang, YuanX, Ma, WenwuX

> -----Original Message-----
> From: Ding, Xuan <xuan.ding@intel.com>
> Sent: Tuesday, October 18, 2022 11:22 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>; Jiang,
> Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX <yuanx.wang@intel.com>; Ma,
> WenwuX <wenwux.ma@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> Subject: [PATCH v6 1/2] vhost: introduce DMA vchannel unconfiguration
> 
> From: Xuan Ding <xuan.ding@intel.com>
> 
> Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
> vchannels in vhost async data path. Lock protection are also added
> to protect DMA vchannels configuration and unconfiguration
> from concurrent calls.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>  doc/guides/prog_guide/vhost_lib.rst    |  7 +++
>  doc/guides/rel_notes/release_22_11.rst |  5 ++
>  lib/vhost/rte_vhost_async.h            | 20 ++++++++
>  lib/vhost/version.map                  |  3 ++
>  lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
>  5 files changed, 99 insertions(+), 5 deletions(-)
> 
> diff --git a/doc/guides/prog_guide/vhost_lib.rst
> b/doc/guides/prog_guide/vhost_lib.rst
> index bad4d819e1..fbe841321f 100644
> --- a/doc/guides/prog_guide/vhost_lib.rst
> +++ b/doc/guides/prog_guide/vhost_lib.rst
> @@ -323,6 +323,13 @@ The following is an overview of some key Vhost API
> functions:
>    Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
>    VDPA_DEVICE_TYPE_BLK.
> 
> +* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
> +
> +  Clean DMA vChannels finished to use. This function needs to be called
> +  after the deregistration of async DMA vchannel has been finished.
> +  After this function is called, the specified DMA device should no
> +  longer be used by the Vhost library.
> +
>  Vhost-user Implementations
>  --------------------------
> 
> diff --git a/doc/guides/rel_notes/release_22_11.rst
> b/doc/guides/rel_notes/release_22_11.rst
> index 2da8bc9661..a1c5cdea7c 100644
> --- a/doc/guides/rel_notes/release_22_11.rst
> +++ b/doc/guides/rel_notes/release_22_11.rst
> @@ -236,6 +236,11 @@ New Features
> 
>       strings $dpdk_binary_or_driver | sed -n 's/^PMD_INFO_STRING= //p'
> 
> +* **Added DMA vChannel unconfiguration for async vhost.**
> +
> +  * Added support to unconfigure DMA vChannels that have been
> unregistered
> +  and no longer used by Vhost library.

Somehow this is making release notes build fail.

Maxime, is this because of the '*' before 'Added support'?

Thanks,
Chenbo

> +
> 
>  Removed Items
>  -------------
> diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> index 1db2a10124..eac7eb568e 100644
> --- a/lib/vhost/rte_vhost_async.h
> +++ b/lib/vhost/rte_vhost_async.h
> @@ -266,6 +266,26 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t
> queue_id,
>  	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t
> count,
>  	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
> 
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior
> notice.
> + *
> + * Unconfigure DMA vChannels in Vhost asynchronous data path.
> + * This function should be called after the DMA vChannel has been
> unregistered.
> + * After this function is called, the specified DMA device should no
> longer
> + * be used by the Vhost library.
> + *
> + * @param dma_id
> + *  the identifier of DMA device
> + * @param vchan_id
> + *  the identifier of virtual DMA channel
> + * @return
> + *  0 on success, and -1 on failure
> + */
> +__rte_experimental
> +int
> +rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> index 7a00b65740..0b61870870 100644
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> @@ -94,6 +94,9 @@ EXPERIMENTAL {
>  	rte_vhost_async_try_dequeue_burst;
>  	rte_vhost_driver_get_vdpa_dev_type;
>  	rte_vhost_clear_queue;
> +
> +	# added in 22.11
> +	rte_vhost_async_dma_unconfigure;
>  };
> 
>  INTERNAL {
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 8740aa2788..975c0d3297 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -23,6 +23,7 @@
> 
>  struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
>  pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
> +pthread_mutex_t vhost_dma_lock = PTHREAD_MUTEX_INITIALIZER;
> 
>  struct vhost_vq_stats_name_off {
>  	char name[RTE_VHOST_STATS_NAME_SIZE];
> @@ -1844,19 +1845,20 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  	void *pkts_cmpl_flag_addr;
>  	uint16_t max_desc;
> 
> +	pthread_mutex_lock(&vhost_dma_lock);
>  	if (!rte_dma_is_valid(dma_id)) {
>  		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
> -		return -1;
> +		goto error;
>  	}
> 
>  	if (rte_dma_info_get(dma_id, &info) != 0) {
>  		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d
> information.\n", dma_id);
> -		return -1;
> +		goto error;
>  	}
> 
>  	if (vchan_id >= info.max_vchans) {
>  		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n",
> dma_id, vchan_id);
> -		return -1;
> +		goto error;
>  	}
> 
>  	if (!dma_copy_track[dma_id].vchans) {
> @@ -1868,7 +1870,7 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  			VHOST_LOG_CONFIG("dma", ERR,
>  				"Failed to allocate vchans for DMA %d
> vChannel %u.\n",
>  				dma_id, vchan_id);
> -			return -1;
> +			goto error;
>  		}
> 
>  		dma_copy_track[dma_id].vchans = vchans;
> @@ -1877,6 +1879,7 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
>  		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already
> registered.\n",
>  			dma_id, vchan_id);
> +		pthread_mutex_unlock(&vhost_dma_lock);
>  		return 0;
>  	}
> 
> @@ -1894,7 +1897,7 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  			rte_free(dma_copy_track[dma_id].vchans);
>  			dma_copy_track[dma_id].vchans = NULL;
>  		}
> -		return -1;
> +		goto error;
>  	}
> 
>  	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr =
> pkts_cmpl_flag_addr;
> @@ -1902,7 +1905,12 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
>  	dma_copy_track[dma_id].nr_vchans++;
> 
> +	pthread_mutex_unlock(&vhost_dma_lock);
>  	return 0;
> +
> +error:
> +	pthread_mutex_unlock(&vhost_dma_lock);
> +	return -1;
>  }
> 
>  int
> @@ -2091,5 +2099,56 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t
> queue_id)
>  	return 0;
>  }
> 
> +int
> +rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
> +{
> +	struct rte_dma_info info;
> +	uint16_t max_desc;
> +	int i;
> +
> +	pthread_mutex_lock(&vhost_dma_lock);
> +	if (!rte_dma_is_valid(dma_id)) {
> +		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
> +		goto error;
> +	}
> +
> +	if (rte_dma_info_get(dma_id, &info) != 0) {
> +		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d
> information.\n", dma_id);
> +		goto error;
> +	}
> +
> +	if (vchan_id >= info.max_vchans) {
> +		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n",
> dma_id, vchan_id);
> +		goto error;
> +	}
> +
> +	max_desc = info.max_desc;
> +	for (i = 0; i < max_desc; i++) {
> +		if
> (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] != NULL) {
> +
> 	rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr
> [i]);
> +
> 	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] =
> NULL;
> +		}
> +	}
> +
> +	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr !=
> NULL) {
> +
> 	rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr
> );
> +		dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr =
> NULL;
> +	}
> +
> +	if (dma_copy_track[dma_id].vchans != NULL) {
> +		rte_free(dma_copy_track[dma_id].vchans);
> +		dma_copy_track[dma_id].vchans = NULL;
> +	}
> +
> +	dma_copy_track[dma_id].nr_vchans--;
> +
> +	pthread_mutex_unlock(&vhost_dma_lock);
> +	return 0;
> +
> +error:
> +	pthread_mutex_unlock(&vhost_dma_lock);
> +	return -1;
> +}
> +
>  RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
>  RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v7 0/2] vhost: introduce DMA vchannel unconfiguration
  2022-08-14 14:04 [PATCH v1 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
                   ` (6 preceding siblings ...)
  2022-10-18 15:22 ` [PATCH v6 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-10-20  9:11 ` xuan.ding
  2022-10-20  9:11   ` [PATCH v7 1/2] " xuan.ding
  2022-10-20  9:11   ` [PATCH v7 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  2022-10-25  8:25 ` [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  8 siblings, 2 replies; 43+ messages in thread
From: xuan.ding @ 2022-10-20  9:11 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patchset introduces a new API rte_vhost_async_dma_unconfigure()
to help users manually free DMA vchannels that are no longer in use.

v7:
* Add handling of in-flight packets.
* Fix CI error.

v6:
* Move DMA unconfiguration to the end, because DMA devices may be
  reused after destroy_device().
* Refine the doc to state that the DMA device should not be used in
  vhost after unconfiguration.

v5:
* Use mutex instead of spinlock.
* Improve code readability.

v4:
* Rebase to 22.11 rc1.
* Fix the usage of 'dma_ref_count' to make sure the specified DMA device
  is not used by any vhost ports before unconfiguration.

v3:
* Rebase to latest DPDK.
* Refine some descriptions in the doc.
* Fix one bug in the vhost example.

v2:
* Add spinlock protection.
* Fix a memory leak issue.
* Refine the doc.

Xuan Ding (2):
  vhost: introduce DMA vchannel unconfiguration
  examples/vhost: unconfigure DMA vchannel

 doc/guides/prog_guide/vhost_lib.rst    |  5 ++
 doc/guides/rel_notes/release_22_11.rst |  5 ++
 examples/vhost/main.c                  |  8 +++
 lib/vhost/rte_vhost_async.h            | 20 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 75 ++++++++++++++++++++++++--
 6 files changed, 111 insertions(+), 5 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v7 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-20  9:11 ` [PATCH v7 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-10-20  9:11   ` xuan.ding
  2022-10-21  8:09     ` Maxime Coquelin
  2022-10-20  9:11   ` [PATCH v7 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  1 sibling, 1 reply; 43+ messages in thread
From: xuan.ding @ 2022-10-20  9:11 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
vchannels in the vhost async data path. Lock protection is also added
to protect DMA vchannels configuration and unconfiguration
from concurrent calls.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  5 ++
 doc/guides/rel_notes/release_22_11.rst |  5 ++
 lib/vhost/rte_vhost_async.h            | 20 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 75 ++++++++++++++++++++++++--
 5 files changed, 103 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index bad4d819e1..cdf0c0b026 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -323,6 +323,11 @@ The following is an overview of some key Vhost API functions:
   Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
   VDPA_DEVICE_TYPE_BLK.
 
+* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
+
+  Clean up DMA vChannels that are no longer in use. After this function is
+  called, the specified DMA vChannel should no longer be used by the Vhost library.
+
 Vhost-user Implementations
 --------------------------
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 2da8bc9661..dc52d1ce2f 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -225,6 +225,11 @@ New Features
   sysfs entries to adjust the minimum and maximum uncore frequency values,
   which works on Linux with Intel hardware only.
 
+* **Added DMA vChannel unconfiguration for async vhost.**
+
+  Added support to unconfigure DMA vChannels that are no longer used
+  by the Vhost library.
+
 * **Rewritten pmdinfo script.**
 
   The ``dpdk-pmdinfo.py`` script was rewritten to produce valid JSON only.
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 1db2a10124..45c75be55c 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -266,6 +266,26 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
 	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Unconfigure DMA vChannels in Vhost asynchronous data path.
+ * This function should be called when the specified DMA device is no longer
+ * used by the Vhost library. Before calling this function, make sure there
+ * are no in-flight packets left in the DMA vChannel.
+ *
+ * @param dma_id
+ *  the identifier of DMA device
+ * @param vchan_id
+ *  the identifier of virtual DMA channel
+ * @return
+ *  0 on success, and -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 7a00b65740..0b61870870 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -94,6 +94,9 @@ EXPERIMENTAL {
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
 	rte_vhost_clear_queue;
+
+	# added in 22.11
+	rte_vhost_async_dma_unconfigure;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 8740aa2788..3f733b4878 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -23,6 +23,7 @@
 
 struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
 pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
+pthread_mutex_t vhost_dma_lock = PTHREAD_MUTEX_INITIALIZER;
 
 struct vhost_vq_stats_name_off {
 	char name[RTE_VHOST_STATS_NAME_SIZE];
@@ -1844,19 +1845,20 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	void *pkts_cmpl_flag_addr;
 	uint16_t max_desc;
 
+	pthread_mutex_lock(&vhost_dma_lock);
 	if (!rte_dma_is_valid(dma_id)) {
 		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (rte_dma_info_get(dma_id, &info) != 0) {
 		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (vchan_id >= info.max_vchans) {
 		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
-		return -1;
+		goto error;
 	}
 
 	if (!dma_copy_track[dma_id].vchans) {
@@ -1868,7 +1870,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			VHOST_LOG_CONFIG("dma", ERR,
 				"Failed to allocate vchans for DMA %d vChannel %u.\n",
 				dma_id, vchan_id);
-			return -1;
+			goto error;
 		}
 
 		dma_copy_track[dma_id].vchans = vchans;
@@ -1877,6 +1879,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
 		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n",
 			dma_id, vchan_id);
+		pthread_mutex_unlock(&vhost_dma_lock);
 		return 0;
 	}
 
@@ -1894,7 +1897,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			rte_free(dma_copy_track[dma_id].vchans);
 			dma_copy_track[dma_id].vchans = NULL;
 		}
-		return -1;
+		goto error;
 	}
 
 	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = pkts_cmpl_flag_addr;
@@ -1902,7 +1905,12 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
 	dma_copy_track[dma_id].nr_vchans++;
 
+	pthread_mutex_unlock(&vhost_dma_lock);
 	return 0;
+
+error:
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return -1;
 }
 
 int
@@ -2091,5 +2099,62 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 	return 0;
 }
 
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
+{
+	struct rte_dma_info info;
+	struct rte_dma_stats stats = { 0 };
+
+	pthread_mutex_lock(&vhost_dma_lock);
+	if (!rte_dma_is_valid(dma_id)) {
+		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
+		goto error;
+	}
+
+	if (rte_dma_info_get(dma_id, &info) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
+		goto error;
+	}
+
+	if (vchan_id >= info.max_vchans) {
+		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	if (dma_copy_track[dma_id].nr_vchans == 0)
+		goto error;
+
+	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr);
+		dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = NULL;
+	}
+
+	dma_copy_track[dma_id].nr_vchans--;
+
+	if (dma_copy_track[dma_id].nr_vchans == 0 && dma_copy_track[dma_id].vchans != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans);
+		dma_copy_track[dma_id].vchans = NULL;
+	}
+
+	if (rte_dma_stats_get(dma_id, vchan_id, &stats) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR,
+				 "Failed to get stats for DMA %d vChannel %u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	if (stats.submitted - stats.completed != 0) {
+		VHOST_LOG_CONFIG("dma", ERR,
+				 "Do not unconfigure when there are inflight packets.\n");
+		goto error;
+	}
+
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return 0;
+
+error:
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return -1;
+}
+
 RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
 RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v7 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-20  9:11 ` [PATCH v7 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  2022-10-20  9:11   ` [PATCH v7 1/2] " xuan.ding
@ 2022-10-20  9:11   ` xuan.ding
  2022-10-21  8:12     ` Maxime Coquelin
  1 sibling, 1 reply; 43+ messages in thread
From: xuan.ding @ 2022-10-20  9:11 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patch applies rte_vhost_async_dma_unconfigure() to manually free
DMA vchannels. Before unconfiguration, make sure the specified DMA
device is no longer used by any vhost port.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 examples/vhost/main.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index ac78704d79..42e53a0f9a 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -2066,6 +2066,14 @@ main(int argc, char *argv[])
 	RTE_LCORE_FOREACH_WORKER(lcore_id)
 		rte_eal_wait_lcore(lcore_id);
 
+	for (i = 0; i < dma_count; i++) {
+		if (rte_vhost_async_dma_unconfigure(dmas_id[i], 0) < 0) {
+			RTE_LOG(ERR, VHOST_PORT,
+				"Failed to unconfigure DMA %d in vhost.\n", dmas_id[i]);
+			rte_exit(EXIT_FAILURE, "Cannot use given DMA device\n");
+		}
+	}
+
 	/* clean up the EAL */
 	rte_eal_cleanup();
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v7 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-20  9:11   ` [PATCH v7 1/2] " xuan.ding
@ 2022-10-21  8:09     ` Maxime Coquelin
  2022-10-21  8:22       ` Ding, Xuan
  0 siblings, 1 reply; 43+ messages in thread
From: Maxime Coquelin @ 2022-10-21  8:09 UTC (permalink / raw)
  To: xuan.ding, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma



On 10/20/22 11:11, xuan.ding@intel.com wrote:
> From: Xuan Ding <xuan.ding@intel.com>
> 
> Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
> vchannels in vhost async data path. Lock protection are also added
> to protect DMA vchannels configuration and unconfiguration
> from concurrent calls.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>   doc/guides/prog_guide/vhost_lib.rst    |  5 ++
>   doc/guides/rel_notes/release_22_11.rst |  5 ++
>   lib/vhost/rte_vhost_async.h            | 20 +++++++
>   lib/vhost/version.map                  |  3 ++
>   lib/vhost/vhost.c                      | 75 ++++++++++++++++++++++++--
>   5 files changed, 103 insertions(+), 5 deletions(-)
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v7 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-20  9:11   ` [PATCH v7 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
@ 2022-10-21  8:12     ` Maxime Coquelin
  0 siblings, 0 replies; 43+ messages in thread
From: Maxime Coquelin @ 2022-10-21  8:12 UTC (permalink / raw)
  To: xuan.ding, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, yvonnex.yang, cheng1.jiang,
	yuanx.wang, wenwux.ma



On 10/20/22 11:11, xuan.ding@intel.com wrote:
> From: Xuan Ding <xuan.ding@intel.com>
> 
> This patch applies rte_vhost_async_dma_unconfigure() to manually free
> DMA vchannels. Before unconfiguration, make sure the specified DMA
> device is no longer used by any vhost ports.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>   examples/vhost/main.c | 8 ++++++++
>   1 file changed, 8 insertions(+)
> 
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index ac78704d79..42e53a0f9a 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -2066,6 +2066,14 @@ main(int argc, char *argv[])
>   	RTE_LCORE_FOREACH_WORKER(lcore_id)
>   		rte_eal_wait_lcore(lcore_id);
>   
> +	for (i = 0; i < dma_count; i++) {
> +		if (rte_vhost_async_dma_unconfigure(dmas_id[i], 0) < 0) {
> +			RTE_LOG(ERR, VHOST_PORT,
> +				"Failed to unconfigure DMA %d in vhost.\n", dmas_id[i]);
> +			rte_exit(EXIT_FAILURE, "Cannot use given DMA device\n");
> +		}
> +	}
> +
>   	/* clean up the EAL */
>   	rte_eal_cleanup();
>   

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v7 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-21  8:09     ` Maxime Coquelin
@ 2022-10-21  8:22       ` Ding, Xuan
  0 siblings, 0 replies; 43+ messages in thread
From: Ding, Xuan @ 2022-10-21  8:22 UTC (permalink / raw)
  To: Maxime Coquelin, Xia, Chenbo
  Cc: dev, Hu, Jiayu, He, Xingguang, Yang, YvonneX, Jiang, Cheng1,
	Wang, YuanX, Ma, WenwuX

Hi Maxime,

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Friday, October 21, 2022 4:10 PM
> To: Ding, Xuan <xuan.ding@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>;
> Jiang, Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX
> <yuanx.wang@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>
> Subject: Re: [PATCH v7 1/2] vhost: introduce DMA vchannel unconfiguration
> 
> 
> 
> On 10/20/22 11:11, xuan.ding@intel.com wrote:
> > From: Xuan Ding <xuan.ding@intel.com>
> >
> > Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
> > vchannels in vhost async data path. Lock protection are also added to
> > protect DMA vchannels configuration and unconfiguration from
> > concurrent calls.
> >
> > Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> > ---
> >   doc/guides/prog_guide/vhost_lib.rst    |  5 ++
> >   doc/guides/rel_notes/release_22_11.rst |  5 ++
> >   lib/vhost/rte_vhost_async.h            | 20 +++++++
> >   lib/vhost/version.map                  |  3 ++
> >   lib/vhost/vhost.c                      | 75 ++++++++++++++++++++++++--
> >   5 files changed, 103 insertions(+), 5 deletions(-)
> >
> 
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

This patch needs minor changes.
Please help to review and wait until the v8 test passes before merging.
Thanks a lot.

Regards,
Xuan

> 
> Thanks,
> Maxime


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration
  2022-08-14 14:04 [PATCH v1 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
                   ` (7 preceding siblings ...)
  2022-10-20  9:11 ` [PATCH v7 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-10-25  8:25 ` xuan.ding
  2022-10-25  8:25   ` [PATCH v8 1/2] " xuan.ding
                     ` (2 more replies)
  8 siblings, 3 replies; 43+ messages in thread
From: xuan.ding @ 2022-10-25  8:25 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, weix.ling, cheng1.jiang, yuanx.wang,
	wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patchset introduces a new API rte_vhost_async_dma_unconfigure()
to help users manually free DMA vChannels that are no longer in use.

v8:
* Check for in-flight packets before releasing the virtual channel.

v7:
* Add in-flight packet handling.
* Fix CI error.

v6:
* Move DMA unconfiguration to the end, since DMA devices may be reused
  after destroy_device().
* Refine the doc to state that the DMA device should not be used in vhost
  after unconfiguration.

v5:
* Use mutex instead of spinlock.
* Improve code readability.

v4:
* Rebase to 22.11 rc1.
* Fix the usage of 'dma_ref_count' to make sure the specified DMA device
  is not used by any vhost ports before unconfiguration.

v3:
* Rebase to latest DPDK.
* Refine some descriptions in the doc.
* Fix one bug in the vhost example.

v2:
* Add spinlock protection.
* Fix a memory leak issue.
* Refine the doc.

Xuan Ding (2):
  vhost: introduce DMA vchannel unconfiguration
  examples/vhost: unconfigure DMA vchannel

 doc/guides/prog_guide/vhost_lib.rst    |  5 ++
 doc/guides/rel_notes/release_22_11.rst |  5 ++
 examples/vhost/main.c                  |  8 +++
 lib/vhost/rte_vhost_async.h            | 20 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 72 ++++++++++++++++++++++++--
 6 files changed, 108 insertions(+), 5 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v8 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-25  8:25 ` [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
@ 2022-10-25  8:25   ` xuan.ding
  2022-10-26  5:13     ` Maxime Coquelin
  2022-10-26  9:02     ` Xia, Chenbo
  2022-10-25  8:25   ` [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  2022-10-26  9:07   ` [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration Xia, Chenbo
  2 siblings, 2 replies; 43+ messages in thread
From: xuan.ding @ 2022-10-25  8:25 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, weix.ling, cheng1.jiang, yuanx.wang,
	wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
vChannels in the vhost async data path. Lock protection is also added
to protect DMA vChannel configuration and unconfiguration
from concurrent calls.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  5 ++
 doc/guides/rel_notes/release_22_11.rst |  5 ++
 lib/vhost/rte_vhost_async.h            | 20 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 72 ++++++++++++++++++++++++--
 5 files changed, 100 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index bad4d819e1..0d9eca1f7d 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -323,6 +323,11 @@ The following is an overview of some key Vhost API functions:
   Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
   VDPA_DEVICE_TYPE_BLK.
 
+* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
+
+  Clean up a DMA vChannel that is no longer in use. After this function is
+  called, the specified DMA vChannel should no longer be used by the Vhost library.
+
 Vhost-user Implementations
 --------------------------
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 2da8bc9661..bbd1c5aa9c 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -225,6 +225,11 @@ New Features
   sysfs entries to adjust the minimum and maximum uncore frequency values,
   which works on Linux with Intel hardware only.
 
+* **Added DMA vChannel unconfiguration for async vhost.**
+
+  Added support to unconfigure DMA vChannel that is no longer used
+  by the Vhost library.
+
 * **Rewritten pmdinfo script.**
 
   The ``dpdk-pmdinfo.py`` script was rewritten to produce valid JSON only.
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 1db2a10124..8f190dd44b 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -266,6 +266,26 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
 	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Unconfigure DMA vChannel in Vhost asynchronous data path.
+ * This function should be called when the specified DMA vChannel is no longer
+ * used by the Vhost library. Before calling this function, make sure there
+ * are no in-flight packets left in the DMA vChannel.
+ *
+ * @param dma_id
+ *  the identifier of DMA device
+ * @param vchan_id
+ *  the identifier of virtual DMA channel
+ * @return
+ *  0 on success, and -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 7a00b65740..0b61870870 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -94,6 +94,9 @@ EXPERIMENTAL {
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
 	rte_vhost_clear_queue;
+
+	# added in 22.11
+	rte_vhost_async_dma_unconfigure;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 8740aa2788..1bb01c2a2e 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -23,6 +23,7 @@
 
 struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
 pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
+pthread_mutex_t vhost_dma_lock = PTHREAD_MUTEX_INITIALIZER;
 
 struct vhost_vq_stats_name_off {
 	char name[RTE_VHOST_STATS_NAME_SIZE];
@@ -1844,19 +1845,21 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	void *pkts_cmpl_flag_addr;
 	uint16_t max_desc;
 
+	pthread_mutex_lock(&vhost_dma_lock);
+
 	if (!rte_dma_is_valid(dma_id)) {
 		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (rte_dma_info_get(dma_id, &info) != 0) {
 		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (vchan_id >= info.max_vchans) {
 		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
-		return -1;
+		goto error;
 	}
 
 	if (!dma_copy_track[dma_id].vchans) {
@@ -1868,7 +1871,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			VHOST_LOG_CONFIG("dma", ERR,
 				"Failed to allocate vchans for DMA %d vChannel %u.\n",
 				dma_id, vchan_id);
-			return -1;
+			goto error;
 		}
 
 		dma_copy_track[dma_id].vchans = vchans;
@@ -1877,6 +1880,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
 		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n",
 			dma_id, vchan_id);
+		pthread_mutex_unlock(&vhost_dma_lock);
 		return 0;
 	}
 
@@ -1894,7 +1898,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			rte_free(dma_copy_track[dma_id].vchans);
 			dma_copy_track[dma_id].vchans = NULL;
 		}
-		return -1;
+		goto error;
 	}
 
 	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = pkts_cmpl_flag_addr;
@@ -1902,7 +1906,12 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
 	dma_copy_track[dma_id].nr_vchans++;
 
+	pthread_mutex_unlock(&vhost_dma_lock);
 	return 0;
+
+error:
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return -1;
 }
 
 int
@@ -2091,5 +2100,58 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 	return 0;
 }
 
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
+{
+	struct rte_dma_info info;
+	struct rte_dma_stats stats = { 0 };
+
+	pthread_mutex_lock(&vhost_dma_lock);
+
+	if (!rte_dma_is_valid(dma_id)) {
+		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
+		goto error;
+	}
+
+	if (rte_dma_info_get(dma_id, &info) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
+		goto error;
+	}
+
+	if (vchan_id >= info.max_vchans || !dma_copy_track[dma_id].vchans ||
+		!dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
+		VHOST_LOG_CONFIG("dma", ERR, "Invalid channel %d:%u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	if (rte_dma_stats_get(dma_id, vchan_id, &stats) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR,
+				 "Failed to get stats for DMA %d vChannel %u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	if (stats.submitted - stats.completed != 0) {
+		VHOST_LOG_CONFIG("dma", ERR,
+				 "Do not unconfigure when there are inflight packets.\n");
+		goto error;
+	}
+
+	rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr);
+	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = NULL;
+	dma_copy_track[dma_id].nr_vchans--;
+
+	if (dma_copy_track[dma_id].nr_vchans == 0) {
+		rte_free(dma_copy_track[dma_id].vchans);
+		dma_copy_track[dma_id].vchans = NULL;
+	}
+
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return 0;
+
+error:
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return -1;
+}
+
 RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
 RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-25  8:25 ` [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  2022-10-25  8:25   ` [PATCH v8 1/2] " xuan.ding
@ 2022-10-25  8:25   ` xuan.ding
  2022-10-25  9:56     ` Ling, WeiX
                       ` (2 more replies)
  2022-10-26  9:07   ` [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration Xia, Chenbo
  2 siblings, 3 replies; 43+ messages in thread
From: xuan.ding @ 2022-10-25  8:25 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, weix.ling, cheng1.jiang, yuanx.wang,
	wenwux.ma, Xuan Ding

From: Xuan Ding <xuan.ding@intel.com>

This patch applies rte_vhost_async_dma_unconfigure() to manually free
DMA vChannels. Before unconfiguration, make sure the specified DMA
vChannel is no longer used by any vhost port.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 examples/vhost/main.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index ac78704d79..42e53a0f9a 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -2066,6 +2066,14 @@ main(int argc, char *argv[])
 	RTE_LCORE_FOREACH_WORKER(lcore_id)
 		rte_eal_wait_lcore(lcore_id);
 
+	for (i = 0; i < dma_count; i++) {
+		if (rte_vhost_async_dma_unconfigure(dmas_id[i], 0) < 0) {
+			RTE_LOG(ERR, VHOST_PORT,
+				"Failed to unconfigure DMA %d in vhost.\n", dmas_id[i]);
+			rte_exit(EXIT_FAILURE, "Cannot use given DMA device\n");
+		}
+	}
+
 	/* clean up the EAL */
 	rte_eal_cleanup();
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-25  8:25   ` [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
@ 2022-10-25  9:56     ` Ling, WeiX
  2022-10-26  5:14     ` Maxime Coquelin
  2022-10-26  9:03     ` Xia, Chenbo
  2 siblings, 0 replies; 43+ messages in thread
From: Ling, WeiX @ 2022-10-25  9:56 UTC (permalink / raw)
  To: Ding, Xuan, maxime.coquelin, Xia, Chenbo
  Cc: dev, Hu, Jiayu, He, Xingguang, Jiang, Cheng1, Wang, YuanX, Ma, WenwuX

> -----Original Message-----
> From: Ding, Xuan <xuan.ding@intel.com>
> Sent: Tuesday, October 25, 2022 4:26 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Ling, WeiX <weix.ling@intel.com>; Jiang,
> Cheng1 <cheng1.jiang@intel.com>; Wang, YuanX <yuanx.wang@intel.com>;
> Ma, WenwuX <wenwux.ma@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> Subject: [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel
> 
> From: Xuan Ding <xuan.ding@intel.com>
> 
> This patch applies rte_vhost_async_dma_unconfigure() to manually free
> DMA vChannels. Before unconfiguration, make sure the specified DMA
> vChannel is no longer used by any vhost ports.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
Tested-by: Wei Ling <weix.ling@intel.com>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v8 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-25  8:25   ` [PATCH v8 1/2] " xuan.ding
@ 2022-10-26  5:13     ` Maxime Coquelin
  2022-10-26  9:02     ` Xia, Chenbo
  1 sibling, 0 replies; 43+ messages in thread
From: Maxime Coquelin @ 2022-10-26  5:13 UTC (permalink / raw)
  To: xuan.ding, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, weix.ling, cheng1.jiang, yuanx.wang,
	wenwux.ma



On 10/25/22 10:25, xuan.ding@intel.com wrote:
> From: Xuan Ding <xuan.ding@intel.com>
> 
> Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
> vChannels in vhost async data path. Lock protection are also added
> to protect DMA vChannel configuration and unconfiguration
> from concurrent calls.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>   doc/guides/prog_guide/vhost_lib.rst    |  5 ++
>   doc/guides/rel_notes/release_22_11.rst |  5 ++
>   lib/vhost/rte_vhost_async.h            | 20 +++++++
>   lib/vhost/version.map                  |  3 ++
>   lib/vhost/vhost.c                      | 72 ++++++++++++++++++++++++--
>   5 files changed, 100 insertions(+), 5 deletions(-)
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-25  8:25   ` [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  2022-10-25  9:56     ` Ling, WeiX
@ 2022-10-26  5:14     ` Maxime Coquelin
  2022-10-26  9:03     ` Xia, Chenbo
  2 siblings, 0 replies; 43+ messages in thread
From: Maxime Coquelin @ 2022-10-26  5:14 UTC (permalink / raw)
  To: xuan.ding, chenbo.xia
  Cc: dev, jiayu.hu, xingguang.he, weix.ling, cheng1.jiang, yuanx.wang,
	wenwux.ma



On 10/25/22 10:25, xuan.ding@intel.com wrote:
> From: Xuan Ding <xuan.ding@intel.com>
> 
> This patch applies rte_vhost_async_dma_unconfigure() to manually free
> DMA vChannels. Before unconfiguration, make sure the specified DMA
> vChannel is no longer used by any vhost ports.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>   examples/vhost/main.c | 8 ++++++++
>   1 file changed, 8 insertions(+)
> 
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index ac78704d79..42e53a0f9a 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -2066,6 +2066,14 @@ main(int argc, char *argv[])
>   	RTE_LCORE_FOREACH_WORKER(lcore_id)
>   		rte_eal_wait_lcore(lcore_id);
>   
> +	for (i = 0; i < dma_count; i++) {
> +		if (rte_vhost_async_dma_unconfigure(dmas_id[i], 0) < 0) {
> +			RTE_LOG(ERR, VHOST_PORT,
> +				"Failed to unconfigure DMA %d in vhost.\n", dmas_id[i]);
> +			rte_exit(EXIT_FAILURE, "Cannot use given DMA device\n");
> +		}
> +	}
> +
>   	/* clean up the EAL */
>   	rte_eal_cleanup();
>   

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v8 1/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-25  8:25   ` [PATCH v8 1/2] " xuan.ding
  2022-10-26  5:13     ` Maxime Coquelin
@ 2022-10-26  9:02     ` Xia, Chenbo
  1 sibling, 0 replies; 43+ messages in thread
From: Xia, Chenbo @ 2022-10-26  9:02 UTC (permalink / raw)
  To: Ding, Xuan, maxime.coquelin
  Cc: dev, Hu, Jiayu, He, Xingguang, Ling, WeiX, Jiang, Cheng1, Wang,
	YuanX, Ma, WenwuX

> -----Original Message-----
> From: Ding, Xuan <xuan.ding@intel.com>
> Sent: Tuesday, October 25, 2022 4:26 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Ling, WeiX <weix.ling@intel.com>; Jiang, Cheng1
> <cheng1.jiang@intel.com>; Wang, YuanX <yuanx.wang@intel.com>; Ma, WenwuX
> <wenwux.ma@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> Subject: [PATCH v8 1/2] vhost: introduce DMA vchannel unconfiguration
> 
> From: Xuan Ding <xuan.ding@intel.com>
> 
> Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
> vChannels in vhost async data path. Lock protection are also added
> to protect DMA vChannel configuration and unconfiguration
> from concurrent calls.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>  doc/guides/prog_guide/vhost_lib.rst    |  5 ++
>  doc/guides/rel_notes/release_22_11.rst |  5 ++
>  lib/vhost/rte_vhost_async.h            | 20 +++++++
>  lib/vhost/version.map                  |  3 ++
>  lib/vhost/vhost.c                      | 72 ++++++++++++++++++++++++--
>  5 files changed, 100 insertions(+), 5 deletions(-)
> 
> diff --git a/doc/guides/prog_guide/vhost_lib.rst
> b/doc/guides/prog_guide/vhost_lib.rst
> index bad4d819e1..0d9eca1f7d 100644
> --- a/doc/guides/prog_guide/vhost_lib.rst
> +++ b/doc/guides/prog_guide/vhost_lib.rst
> @@ -323,6 +323,11 @@ The following is an overview of some key Vhost API
> functions:
>    Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
>    VDPA_DEVICE_TYPE_BLK.
> 
> +* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
> +
> +  Clean up a DMA vChannel that has finished being used. After this
> +  function is called, the specified DMA vChannel should no longer be
> +  used by the Vhost library.
> +
>  Vhost-user Implementations
>  --------------------------
> 
> diff --git a/doc/guides/rel_notes/release_22_11.rst
> b/doc/guides/rel_notes/release_22_11.rst
> index 2da8bc9661..bbd1c5aa9c 100644
> --- a/doc/guides/rel_notes/release_22_11.rst
> +++ b/doc/guides/rel_notes/release_22_11.rst
> @@ -225,6 +225,11 @@ New Features
>    sysfs entries to adjust the minimum and maximum uncore frequency values,
>    which works on Linux with Intel hardware only.
> 
> +* **Added DMA vChannel unconfiguration for async vhost.**
> +
> +  Added support to unconfigure DMA vChannel that is no longer used
> +  by the Vhost library.
> +
>  * **Rewritten pmdinfo script.**
> 
>    The ``dpdk-pmdinfo.py`` script was rewritten to produce valid JSON only.
> diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> index 1db2a10124..8f190dd44b 100644
> --- a/lib/vhost/rte_vhost_async.h
> +++ b/lib/vhost/rte_vhost_async.h
> @@ -266,6 +266,26 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t
> queue_id,
>  	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t
> count,
>  	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
> 
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
> + *
> + * Unconfigure a DMA vChannel in the Vhost asynchronous data path.
> + * This function should be called when the specified DMA vChannel is no
> + * longer used by the Vhost library. Before calling this function, make
> + * sure there are no in-flight packets in the DMA vChannel.
> + *
> + * @param dma_id
> + *  the identifier of DMA device
> + * @param vchan_id
> + *  the identifier of virtual DMA channel
> + * @return
> + *  0 on success, and -1 on failure
> + */
> +__rte_experimental
> +int
> +rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> index 7a00b65740..0b61870870 100644
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> @@ -94,6 +94,9 @@ EXPERIMENTAL {
>  	rte_vhost_async_try_dequeue_burst;
>  	rte_vhost_driver_get_vdpa_dev_type;
>  	rte_vhost_clear_queue;
> +
> +	# added in 22.11
> +	rte_vhost_async_dma_unconfigure;
>  };
> 
>  INTERNAL {
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 8740aa2788..1bb01c2a2e 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -23,6 +23,7 @@
> 
>  struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
>  pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
> +pthread_mutex_t vhost_dma_lock = PTHREAD_MUTEX_INITIALIZER;
> 
>  struct vhost_vq_stats_name_off {
>  	char name[RTE_VHOST_STATS_NAME_SIZE];
> @@ -1844,19 +1845,21 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  	void *pkts_cmpl_flag_addr;
>  	uint16_t max_desc;
> 
> +	pthread_mutex_lock(&vhost_dma_lock);
> +
>  	if (!rte_dma_is_valid(dma_id)) {
>  		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
> -		return -1;
> +		goto error;
>  	}
> 
>  	if (rte_dma_info_get(dma_id, &info) != 0) {
>  		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
> -		return -1;
> +		goto error;
>  	}
> 
>  	if (vchan_id >= info.max_vchans) {
>  		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
> -		return -1;
> +		goto error;
>  	}
> 
>  	if (!dma_copy_track[dma_id].vchans) {
> @@ -1868,7 +1871,7 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  			VHOST_LOG_CONFIG("dma", ERR,
>  				"Failed to allocate vchans for DMA %d vChannel %u.\n",
>  				dma_id, vchan_id);
> -			return -1;
> +			goto error;
>  		}
> 
>  		dma_copy_track[dma_id].vchans = vchans;
> @@ -1877,6 +1880,7 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
>  		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n",
>  			dma_id, vchan_id);
> +		pthread_mutex_unlock(&vhost_dma_lock);
>  		return 0;
>  	}
> 
> @@ -1894,7 +1898,7 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  			rte_free(dma_copy_track[dma_id].vchans);
>  			dma_copy_track[dma_id].vchans = NULL;
>  		}
> -		return -1;
> +		goto error;
>  	}
> 
>  	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = pkts_cmpl_flag_addr;
> @@ -1902,7 +1906,12 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>  	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
>  	dma_copy_track[dma_id].nr_vchans++;
> 
> +	pthread_mutex_unlock(&vhost_dma_lock);
>  	return 0;
> +
> +error:
> +	pthread_mutex_unlock(&vhost_dma_lock);
> +	return -1;
>  }
> 
>  int
> @@ -2091,5 +2100,58 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t
> queue_id)
>  	return 0;
>  }
> 
> +int
> +rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
> +{
> +	struct rte_dma_info info;
> +	struct rte_dma_stats stats = { 0 };
> +
> +	pthread_mutex_lock(&vhost_dma_lock);
> +
> +	if (!rte_dma_is_valid(dma_id)) {
> +		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
> +		goto error;
> +	}
> +
> +	if (rte_dma_info_get(dma_id, &info) != 0) {
> +		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
> +		goto error;
> +	}
> +
> +	if (vchan_id >= info.max_vchans || !dma_copy_track[dma_id].vchans ||
> +		!dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
> +		VHOST_LOG_CONFIG("dma", ERR, "Invalid channel %d:%u.\n", dma_id, vchan_id);
> +		goto error;
> +	}
> +
> +	if (rte_dma_stats_get(dma_id, vchan_id, &stats) != 0) {
> +		VHOST_LOG_CONFIG("dma", ERR,
> +				 "Failed to get stats for DMA %d vChannel %u.\n", dma_id, vchan_id);
> +		goto error;
> +	}
> +
> +	if (stats.submitted - stats.completed != 0) {
> +		VHOST_LOG_CONFIG("dma", ERR,
> +				 "Do not unconfigure when there are inflight packets.\n");
> +		goto error;
> +	}
> +
> +	rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr);
> +	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = NULL;
> +	dma_copy_track[dma_id].nr_vchans--;
> +
> +	if (dma_copy_track[dma_id].nr_vchans == 0) {
> +		rte_free(dma_copy_track[dma_id].vchans);
> +		dma_copy_track[dma_id].vchans = NULL;
> +	}
> +
> +	pthread_mutex_unlock(&vhost_dma_lock);
> +	return 0;
> +
> +error:
> +	pthread_mutex_unlock(&vhost_dma_lock);
> +	return -1;
> +}
> +
>  RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
>  RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
> --
> 2.17.1

Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel
  2022-10-25  8:25   ` [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
  2022-10-25  9:56     ` Ling, WeiX
  2022-10-26  5:14     ` Maxime Coquelin
@ 2022-10-26  9:03     ` Xia, Chenbo
  2 siblings, 0 replies; 43+ messages in thread
From: Xia, Chenbo @ 2022-10-26  9:03 UTC (permalink / raw)
  To: Ding, Xuan, maxime.coquelin
  Cc: dev, Hu, Jiayu, He, Xingguang, Ling, WeiX, Jiang, Cheng1, Wang,
	YuanX, Ma, WenwuX

> -----Original Message-----
> From: Ding, Xuan <xuan.ding@intel.com>
> Sent: Tuesday, October 25, 2022 4:26 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Ling, WeiX <weix.ling@intel.com>; Jiang, Cheng1
> <cheng1.jiang@intel.com>; Wang, YuanX <yuanx.wang@intel.com>; Ma, WenwuX
> <wenwux.ma@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> Subject: [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel
> 
> From: Xuan Ding <xuan.ding@intel.com>
> 
> This patch applies rte_vhost_async_dma_unconfigure() to manually free
> DMA vChannels. Before unconfiguration, make sure the specified DMA
> vChannel is no longer used by any vhost ports.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>  examples/vhost/main.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index ac78704d79..42e53a0f9a 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -2066,6 +2066,14 @@ main(int argc, char *argv[])
>  	RTE_LCORE_FOREACH_WORKER(lcore_id)
>  		rte_eal_wait_lcore(lcore_id);
> 
> +	for (i = 0; i < dma_count; i++) {
> +		if (rte_vhost_async_dma_unconfigure(dmas_id[i], 0) < 0) {
> +			RTE_LOG(ERR, VHOST_PORT,
> +				"Failed to unconfigure DMA %d in vhost.\n",
> dmas_id[i]);
> +			rte_exit(EXIT_FAILURE, "Cannot use given DMA device\n");
> +		}
> +	}
> +
>  	/* clean up the EAL */
>  	rte_eal_cleanup();
> 
> --
> 2.17.1

Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration
  2022-10-25  8:25 ` [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
  2022-10-25  8:25   ` [PATCH v8 1/2] " xuan.ding
  2022-10-25  8:25   ` [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
@ 2022-10-26  9:07   ` Xia, Chenbo
  2 siblings, 0 replies; 43+ messages in thread
From: Xia, Chenbo @ 2022-10-26  9:07 UTC (permalink / raw)
  To: Ding, Xuan, maxime.coquelin
  Cc: dev, Hu, Jiayu, He, Xingguang, Ling, WeiX, Jiang, Cheng1, Wang,
	YuanX, Ma, WenwuX

> -----Original Message-----
> From: Ding, Xuan <xuan.ding@intel.com>
> Sent: Tuesday, October 25, 2022 4:26 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Ling, WeiX <weix.ling@intel.com>; Jiang, Cheng1
> <cheng1.jiang@intel.com>; Wang, YuanX <yuanx.wang@intel.com>; Ma, WenwuX
> <wenwux.ma@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> Subject: [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration
> 
> From: Xuan Ding <xuan.ding@intel.com>
> 
> This patchset introduces a new API rte_vhost_async_dma_unconfigure()
> to help users manually free DMA vChannels that are no longer in use.
> 
> v8:
> * Check for in-flight packets before releasing the virtual channel.
> 
> v7:
> * Add inflight packets processing.
> * Fix CI error.
> 
> v6:
> * Move DMA unconfiguration to the end due to DMA devices maybe reused
>   after destroy_device().
> * Refine the doc to claim the DMA device should not be used in vhost
>   after unconfiguration.
> 
> v5:
> * Use mutex instead of spinlock.
> * Improve code readability.
> 
> v4:
> * Rebase to 22.11 rc1.
> * Fix the usage of 'dma_ref_count' to make sure the specified DMA device
>   is not used by any vhost ports before unconfiguration.
> 
> v3:
> * Rebase to latest DPDK.
> * Refine some descriptions in the doc.
> * Fix one bug in the vhost example.
> 
> v2:
> * Add spinlock protection.
> * Fix a memory leak issue.
> * Refine the doc.
> 
> Xuan Ding (2):
>   vhost: introduce DMA vchannel unconfiguration
>   examples/vhost: unconfigure DMA vchannel
> 
>  doc/guides/prog_guide/vhost_lib.rst    |  5 ++
>  doc/guides/rel_notes/release_22_11.rst |  5 ++
>  examples/vhost/main.c                  |  8 +++
>  lib/vhost/rte_vhost_async.h            | 20 +++++++
>  lib/vhost/version.map                  |  3 ++
>  lib/vhost/vhost.c                      | 72 ++++++++++++++++++++++++--
>  6 files changed, 108 insertions(+), 5 deletions(-)
> 
> --
> 2.17.1

Series applied to next-virtio/main, thanks

^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2022-10-26  9:07 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-14 14:04 [PATCH v1 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
2022-08-14 14:04 ` [PATCH v1 1/2] " xuan.ding
2022-08-14 14:04 ` [PATCH v1 2/2] example/vhost: unconfigure DMA vchannel xuan.ding
2022-09-06  5:21 ` [PATCH v2 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
2022-09-06  5:21   ` [PATCH v2 1/2] " xuan.ding
2022-09-26  6:06     ` Xia, Chenbo
2022-09-26  6:43       ` Ding, Xuan
2022-09-06  5:21   ` [PATCH v2 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
2022-09-29  1:32 ` [PATCH v3 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
2022-09-29  1:32   ` [PATCH v3 1/2] " xuan.ding
2022-09-29  1:32   ` [PATCH v3 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
2022-09-29  8:27     ` Xia, Chenbo
2022-10-08  0:38       ` Ding, Xuan
2022-10-13  6:40 ` [PATCH v4 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
2022-10-13  6:40   ` [PATCH v4 1/2] " xuan.ding
2022-10-13  8:01     ` Maxime Coquelin
2022-10-13  8:45       ` Ding, Xuan
2022-10-13  6:40   ` [PATCH v4 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
2022-10-13  8:07     ` Maxime Coquelin
2022-10-13  8:49       ` Ding, Xuan
2022-10-13  9:27 ` [PATCH v5 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
2022-10-13  9:27   ` [PATCH v5 1/2] " xuan.ding
2022-10-13  9:27   ` [PATCH v5 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
2022-10-18 15:22 ` [PATCH v6 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
2022-10-18 15:22   ` [PATCH v6 1/2] " xuan.ding
2022-10-19  9:28     ` Xia, Chenbo
2022-10-18 15:22   ` [PATCH v6 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
2022-10-19  2:57     ` Ling, WeiX
2022-10-20  9:11 ` [PATCH v7 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
2022-10-20  9:11   ` [PATCH v7 1/2] " xuan.ding
2022-10-21  8:09     ` Maxime Coquelin
2022-10-21  8:22       ` Ding, Xuan
2022-10-20  9:11   ` [PATCH v7 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
2022-10-21  8:12     ` Maxime Coquelin
2022-10-25  8:25 ` [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration xuan.ding
2022-10-25  8:25   ` [PATCH v8 1/2] " xuan.ding
2022-10-26  5:13     ` Maxime Coquelin
2022-10-26  9:02     ` Xia, Chenbo
2022-10-25  8:25   ` [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel xuan.ding
2022-10-25  9:56     ` Ling, WeiX
2022-10-26  5:14     ` Maxime Coquelin
2022-10-26  9:03     ` Xia, Chenbo
2022-10-26  9:07   ` [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration Xia, Chenbo
