DPDK patches and discussions
* [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers
@ 2021-08-26 18:32 Bruce Richardson
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 1/7] app/test: take API tests from skeleton dmadev Bruce Richardson
                   ` (11 more replies)
  0 siblings, 12 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-08-26 18:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, Chengwen Feng, jerinj, Bruce Richardson

This patchset adds a fairly comprehensive set of tests for basic dmadev
functionality. It takes the existing tests that were in the skeleton dmadev and
uses those to create a set of API-level tests to check the basics of the dmadev
library itself. Once that is done, the "selftest" part of the dmadev API is
removed as drivers should largely rely on the standard tests to ensure
compatibility.

For those standard tests, tests are added to verify basic copy operation on
each device, using both the submit function and the submit flag, and verifying
completion gathering using both the "completed()" and "completed_status()"
functions. Beyond that, tests are added for error reporting and handling, as
is a suite of tests for the fill() operation on devices that support it.
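
As a rough illustration of the enqueue/submit/complete pattern these tests
exercise - not part of the patches themselves, and with buffer setup and
real error handling omitted - a minimal sketch against the rte_dmadev API
used in this series might look like:

#include <stdint.h>
#include <stdbool.h>
#include <rte_dmadev.h>

/* Illustrative only: enqueue two copies of the same buffer - one submitted
 * via the per-op flag, one via an explicit doorbell - then poll until both
 * completions are gathered. A real test would bound the polling loop.
 */
static int
copy_and_wait(uint16_t dev_id, uint16_t vchan,
		rte_iova_t src, rte_iova_t dst, uint32_t len)
{
	uint16_t last_idx, done = 0;
	bool has_error = false;

	/* submit implicitly using the flag */
	if (rte_dmadev_copy(dev_id, vchan, src, dst, len,
			RTE_DMA_OP_FLAG_SUBMIT) < 0)
		return -1;
	/* enqueue without the flag, then ring the doorbell explicitly */
	if (rte_dmadev_copy(dev_id, vchan, src, dst, len, 0) < 0)
		return -1;
	rte_dmadev_submit(dev_id, vchan);

	/* gather both completions; completed_status() could be used instead
	 * to get a per-job rte_dma_status_code
	 */
	while (done < 2 && !has_error)
		done += rte_dmadev_completed(dev_id, vchan, 2 - done,
				&last_idx, &has_error);
	return has_error ? -1 : 0;
}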

NOTE: This patchset depends on series v16 of the dmadev set [1]. The first 2
patches should probably be merged into that set for completeness as they move
and adjust basic dmadev code, and the dmadev skeleton driver.

[1] http://patches.dpdk.org/project/dpdk/list/?series=18391

Bruce Richardson (6):
  app/test: take API tests from skeleton dmadev
  dmadev: remove selftest support
  app/test: add basic dmadev instance tests
  app/test: add basic dmadev copy tests
  app/test: add more comprehensive dmadev copy tests
  app/test: test dmadev instance failure handling

Kevin Laatz (1):
  app/test: add dmadev fill tests

 app/test/meson.build                          |   1 +
 app/test/test_dmadev.c                        | 844 +++++++++++++++++-
 .../test/test_dmadev_api.c                    |  24 +-
 drivers/dma/skeleton/meson.build              |   1 -
 drivers/dma/skeleton/skeleton_dmadev.c        |  35 +-
 drivers/dma/skeleton/skeleton_dmadev.h        |   3 -
 lib/dmadev/rte_dmadev.c                       |  10 -
 lib/dmadev/rte_dmadev.h                       |  18 -
 lib/dmadev/rte_dmadev_core.h                  |   4 -
 lib/dmadev/version.map                        |   1 -
 10 files changed, 838 insertions(+), 103 deletions(-)
 rename drivers/dma/skeleton/skeleton_dmadev_test.c => app/test/test_dmadev_api.c (96%)

--
2.30.2



* [dpdk-dev] [RFC PATCH 1/7] app/test: take API tests from skeleton dmadev
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
@ 2021-08-26 18:32 ` Bruce Richardson
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 2/7] dmadev: remove selftest support Bruce Richardson
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-08-26 18:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, Chengwen Feng, jerinj, Bruce Richardson

Rather than having the API-level tests as a self-test for the skeleton
driver, we can include these tests directly in the autotest binary, so
that other drivers can potentially use them in future.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/meson.build                          |  1 +
 app/test/test_dmadev.c                        | 12 ++++++++--
 .../test/test_dmadev_api.c                    | 24 ++++++++-----------
 drivers/dma/skeleton/meson.build              |  1 -
 drivers/dma/skeleton/skeleton_dmadev.c        |  6 +++++
 5 files changed, 27 insertions(+), 17 deletions(-)
 rename drivers/dma/skeleton/skeleton_dmadev_test.c => app/test/test_dmadev_api.c (96%)

diff --git a/app/test/meson.build b/app/test/meson.build
index 881cb4f655..9027eba3a4 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -44,6 +44,7 @@ test_sources = files(
         'test_distributor.c',
         'test_distributor_perf.c',
         'test_dmadev.c',
+        'test_dmadev_api.c',
         'test_eal_flags.c',
         'test_eal_fs.c',
         'test_efd.c',
diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 90e8faafa5..62fe27b7e8 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -9,15 +9,23 @@
 
 #include "test.h"
 
+/* from test_dmadev_api.c */
+extern int test_dmadev_api(uint16_t dev_id);
+
 static int
 test_dmadev_selftest_skeleton(void)
 {
 	const char *pmd = "dma_skeleton";
+	int id;
 	int ret;
 
+	if (rte_vdev_init(pmd, NULL) < 0)
+		return TEST_SKIPPED;
+	id = rte_dmadev_get_dev_id(pmd);
+	if (id < 0)
+		return TEST_SKIPPED;
 	printf("\n### Test dmadev infrastructure using skeleton driver\n");
-	rte_vdev_init(pmd, NULL);
-	ret = rte_dmadev_selftest(rte_dmadev_get_dev_id(pmd));
+	ret = test_dmadev_api(id);
 	rte_vdev_uninit(pmd);
 
 	return ret;
diff --git a/drivers/dma/skeleton/skeleton_dmadev_test.c b/app/test/test_dmadev_api.c
similarity index 96%
rename from drivers/dma/skeleton/skeleton_dmadev_test.c
rename to app/test/test_dmadev_api.c
index be56f07262..8b93628e1c 100644
--- a/drivers/dma/skeleton/skeleton_dmadev_test.c
+++ b/app/test/test_dmadev_api.c
@@ -2,20 +2,16 @@
  * Copyright(c) 2021 HiSilicon Limited.
  */
 
+#include <stdint.h>
 #include <string.h>
 
 #include <rte_common.h>
 #include <rte_cycles.h>
 #include <rte_malloc.h>
 #include <rte_test.h>
+#include <rte_dmadev.h>
 
-/* Using relative path as skeleton_dmadev is not part of exported headers */
-#include "skeleton_dmadev.h"
-
-#define SKELDMA_TEST_DEBUG(fmt, args...) \
-	SKELDMA_LOG(DEBUG, fmt, ## args)
-#define SKELDMA_TEST_INFO(fmt, args...) \
-	SKELDMA_LOG(INFO, fmt, ## args)
+extern int test_dmadev_api(uint16_t dev_id);
 
 #define SKELDMA_TEST_RUN(test) \
 	testsuite_run_test(test, #test)
@@ -73,10 +69,10 @@ testsuite_run_test(int (*test)(void), const char *name)
 		ret = test();
 		if (ret < 0) {
 			failed++;
-			SKELDMA_TEST_INFO("%s Failed", name);
+			printf("%s Failed\n", name);
 		} else {
 			passed++;
-			SKELDMA_TEST_DEBUG("%s Passed", name);
+			printf("%s Passed\n", name);
 		}
 	}
 
@@ -486,11 +482,11 @@ test_dmadev_completed_status(void)
 }
 
 int
-test_dma_skeleton(uint16_t dev_id)
+test_dmadev_api(uint16_t dev_id)
 {
 	int ret = testsuite_setup(dev_id);
 	if (ret) {
-		SKELDMA_TEST_INFO("testsuite setup fail!");
+		printf("testsuite setup fail!");
 		return -1;
 	}
 
@@ -510,9 +506,9 @@ test_dma_skeleton(uint16_t dev_id)
 
 	testsuite_teardown();
 
-	SKELDMA_TEST_INFO("Total tests   : %d\n", total);
-	SKELDMA_TEST_INFO("Passed        : %d\n", passed);
-	SKELDMA_TEST_INFO("Failed        : %d\n", failed);
+	printf("Total tests   : %d\n", total);
+	printf("Passed        : %d\n", passed);
+	printf("Failed        : %d\n", failed);
 
 	if (failed)
 		return -1;
diff --git a/drivers/dma/skeleton/meson.build b/drivers/dma/skeleton/meson.build
index 5d47339c6f..27509b1668 100644
--- a/drivers/dma/skeleton/meson.build
+++ b/drivers/dma/skeleton/meson.build
@@ -4,5 +4,4 @@
 deps += ['dmadev', 'kvargs', 'ring', 'bus_vdev']
 sources = files(
         'skeleton_dmadev.c',
-        'skeleton_dmadev_test.c',
 )
diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c
index 1707e88173..ad129c578c 100644
--- a/drivers/dma/skeleton/skeleton_dmadev.c
+++ b/drivers/dma/skeleton/skeleton_dmadev.c
@@ -30,6 +30,12 @@
 /* Count of instances */
 static uint16_t skeldma_init_once;
 
+int
+test_dma_skeleton(uint16_t dev_id __rte_unused)
+{
+	return 0;
+}
+
 static int
 skeldma_info_get(const struct rte_dmadev *dev, struct rte_dmadev_info *dev_info,
 		 uint32_t info_sz)
-- 
2.30.2



* [dpdk-dev] [RFC PATCH 2/7] dmadev: remove selftest support
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 1/7] app/test: take API tests from skeleton dmadev Bruce Richardson
@ 2021-08-26 18:32 ` Bruce Richardson
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 3/7] app/test: add basic dmadev instance tests Bruce Richardson
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-08-26 18:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, Chengwen Feng, jerinj, Bruce Richardson

Since dmadev provides a common API for all devices, there should be
no need for drivers to perform their own selftests; instead, devices should
be tested by the common tests in the autotest binary.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c                 | 27 ++++-------------
 drivers/dma/skeleton/skeleton_dmadev.c | 41 +++-----------------------
 drivers/dma/skeleton/skeleton_dmadev.h |  3 --
 lib/dmadev/rte_dmadev.c                | 10 -------
 lib/dmadev/rte_dmadev.h                | 18 -----------
 lib/dmadev/rte_dmadev_core.h           |  4 ---
 lib/dmadev/version.map                 |  1 -
 7 files changed, 9 insertions(+), 95 deletions(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 62fe27b7e8..683e024a56 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -13,7 +13,7 @@
 extern int test_dmadev_api(uint16_t dev_id);
 
 static int
-test_dmadev_selftest_skeleton(void)
+test_apis(void)
 {
 	const char *pmd = "dma_skeleton";
 	int id;
@@ -32,30 +32,13 @@ test_dmadev_selftest_skeleton(void)
 }
 
 static int
-test_dmadev_selftests(void)
+test_dmadev(void)
 {
-	const int count = rte_dmadev_count();
-	int ret = 0;
-	int i;
-
 	/* basic sanity on dmadev infrastructure */
-	if (test_dmadev_selftest_skeleton() < 0)
+	if (test_apis() < 0)
 		return -1;
 
-	/* now run self-test on all dmadevs */
-	if (count > 0)
-		printf("\n### Run selftest on each available dmadev\n");
-	for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) {
-		if (rte_dmadevices[i].state != RTE_DMADEV_ATTACHED)
-			continue;
-		int result = rte_dmadev_selftest(i);
-		printf("dmadev %u (%s) selftest: %s\n", i,
-			rte_dmadevices[i].data->dev_name,
-			result == 0 ? "Passed" : "Failed");
-		ret |= result;
-	}
-
-	return ret;
+	return 0;
 }
 
-REGISTER_TEST_COMMAND(dmadev_autotest, test_dmadev_selftests);
+REGISTER_TEST_COMMAND(dmadev_autotest, test_dmadev);
diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c
index ad129c578c..85ba4dae54 100644
--- a/drivers/dma/skeleton/skeleton_dmadev.c
+++ b/drivers/dma/skeleton/skeleton_dmadev.c
@@ -30,12 +30,6 @@
 /* Count of instances */
 static uint16_t skeldma_init_once;
 
-int
-test_dma_skeleton(uint16_t dev_id __rte_unused)
-{
-	return 0;
-}
-
 static int
 skeldma_info_get(const struct rte_dmadev *dev, struct rte_dmadev_info *dev_info,
 		 uint32_t info_sz)
@@ -436,7 +430,6 @@ static const struct rte_dmadev_ops skeldma_ops = {
 	.stats_reset = skeldma_stats_reset,
 
 	.dev_dump = skeldma_dump,
-	.dev_selftest = test_dma_skeleton,
 };
 
 static int
@@ -510,24 +503,11 @@ skeldma_parse_lcore(const char *key __rte_unused,
 	return 0;
 }
 
-static int
-skeldma_parse_selftest(const char *key __rte_unused,
-		       const char *value,
-		       void *opaque)
-{
-	int flag = atoi(value);
-	if (flag == 0 || flag == 1)
-		*(int *)opaque = flag;
-	return 0;
-}
-
 static void
-skeldma_parse_vdev_args(struct rte_vdev_device *vdev,
-			int *lcore_id, int *selftest)
+skeldma_parse_vdev_args(struct rte_vdev_device *vdev, int *lcore_id)
 {
 	static const char *const args[] = {
 		SKELDMA_ARG_LCORE,
-		SKELDMA_ARG_SELFTEST,
 		NULL
 	};
 
@@ -544,11 +524,7 @@ skeldma_parse_vdev_args(struct rte_vdev_device *vdev,
 
 	(void)rte_kvargs_process(kvlist, SKELDMA_ARG_LCORE,
 				 skeldma_parse_lcore, lcore_id);
-	(void)rte_kvargs_process(kvlist, SKELDMA_ARG_SELFTEST,
-				 skeldma_parse_selftest, selftest);
-
-	SKELDMA_INFO("Parse lcore_id = %d selftest = %d\n",
-		     *lcore_id, *selftest);
+	SKELDMA_INFO("Parse lcore_id = %d\n", *lcore_id);
 
 	rte_kvargs_free(kvlist);
 }
@@ -558,7 +534,6 @@ skeldma_probe(struct rte_vdev_device *vdev)
 {
 	const char *name;
 	int lcore_id = -1;
-	int selftest = 0;
 	int ret;
 
 	name = rte_vdev_device_name(vdev);
@@ -576,17 +551,10 @@ skeldma_probe(struct rte_vdev_device *vdev)
 		return -EINVAL;
 	}
 
-	skeldma_parse_vdev_args(vdev, &lcore_id, &selftest);
+	skeldma_parse_vdev_args(vdev, &lcore_id);
 
 	ret = skeldma_create(name, vdev, lcore_id);
 	if (ret >= 0) {
-		/* In case command line argument for 'selftest' was passed;
-		 * if invalid arguments were passed, execution continues but
-		 * without selftest.
-		 */
-		if (selftest)
-			(void)test_dma_skeleton(ret);
-
 		SKELDMA_INFO("Create %s dmadev lcore-id %d\n", name, lcore_id);
 		/* Device instance created; Second instance not possible */
 		skeldma_init_once = 1;
@@ -623,5 +591,4 @@ static struct rte_vdev_driver skeldma_pmd_drv = {
 RTE_LOG_REGISTER_DEFAULT(skeldma_logtype, INFO);
 RTE_PMD_REGISTER_VDEV(dma_skeleton, skeldma_pmd_drv);
 RTE_PMD_REGISTER_PARAM_STRING(dma_skeleton,
-		SKELDMA_ARG_LCORE "=<uint16> "
-		SKELDMA_ARG_SELFTEST "=<0|1> ");
+		SKELDMA_ARG_LCORE "=<uint16> ");
diff --git a/drivers/dma/skeleton/skeleton_dmadev.h b/drivers/dma/skeleton/skeleton_dmadev.h
index e8a310da18..8cdc2bb0c9 100644
--- a/drivers/dma/skeleton/skeleton_dmadev.h
+++ b/drivers/dma/skeleton/skeleton_dmadev.h
@@ -22,7 +22,6 @@ extern int skeldma_logtype;
 	SKELDMA_LOG(ERR, fmt, ## args)
 
 #define SKELDMA_ARG_LCORE	"lcore"
-#define SKELDMA_ARG_SELFTEST	"selftest"
 
 struct skeldma_desc {
 	void *src;
@@ -71,6 +70,4 @@ struct skeldma_hw {
 	uint64_t completed_count;
 };
 
-int test_dma_skeleton(uint16_t dev_id);
-
 #endif /* __SKELETON_DMADEV_H__ */
diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 80be485b78..1c946402db 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -555,13 +555,3 @@ rte_dmadev_dump(uint16_t dev_id, FILE *f)
 
 	return 0;
 }
-
-int
-rte_dmadev_selftest(uint16_t dev_id)
-{
-	struct rte_dmadev *dev = &rte_dmadevices[dev_id];
-
-	RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_selftest, -ENOTSUP);
-	return (*dev->dev_ops->dev_selftest)(dev_id);
-}
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index cf9e4bfa0f..e8f58e9213 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -654,24 +654,6 @@ __rte_experimental
 int
 rte_dmadev_dump(uint16_t dev_id, FILE *f);
 
-/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
- * Trigger the dmadev self test.
- *
- * @param dev_id
- *   The identifier of the device.
- *
- * @return
- *   - 0: selftest successful.
- *   - -ENOTSUP: if the device doesn't support selftest.
- *   - other values < 0 on failure.
- */
-__rte_experimental
-int
-rte_dmadev_selftest(uint16_t dev_id);
-
 /**
  * rte_dma_status_code - DMA transfer result status code defines.
  */
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index aa8e622f85..e94aa1c457 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -53,9 +53,6 @@ typedef int (*rte_dmadev_stats_reset_t)(struct rte_dmadev *dev, uint16_t vchan);
 typedef int (*rte_dmadev_dump_t)(const struct rte_dmadev *dev, FILE *f);
 /**< @internal Used to dump internal information. */
 
-typedef int (*rte_dmadev_selftest_t)(uint16_t dev_id);
-/**< @internal Used to start dmadev selftest. */
-
 typedef int (*rte_dmadev_copy_t)(struct rte_dmadev *dev, uint16_t vchan,
 				 rte_iova_t src, rte_iova_t dst,
 				 uint32_t length, uint64_t flags);
@@ -109,7 +106,6 @@ struct rte_dmadev_ops {
 	rte_dmadev_stats_get_t stats_get;
 	rte_dmadev_stats_reset_t stats_reset;
 	rte_dmadev_dump_t dev_dump;
-	rte_dmadev_selftest_t dev_selftest;
 };
 
 /**
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 86c5e75321..80be592713 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -13,7 +13,6 @@ EXPERIMENTAL {
 	rte_dmadev_get_dev_id;
 	rte_dmadev_info_get;
 	rte_dmadev_is_valid_dev;
-	rte_dmadev_selftest;
 	rte_dmadev_start;
 	rte_dmadev_stats_get;
 	rte_dmadev_stats_reset;
-- 
2.30.2



* [dpdk-dev] [RFC PATCH 3/7] app/test: add basic dmadev instance tests
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 1/7] app/test: take API tests from skeleton dmadev Bruce Richardson
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 2/7] dmadev: remove selftest support Bruce Richardson
@ 2021-08-26 18:32 ` Bruce Richardson
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 4/7] app/test: add basic dmadev copy tests Bruce Richardson
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-08-26 18:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, Chengwen Feng, jerinj, Bruce Richardson

Run basic sanity tests for configuring, starting and stopping a dmadev instance
to help validate drivers. This also provides the framework for future tests for
data-path operation.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 70 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 683e024a56..f895556d29 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -12,6 +12,67 @@
 /* from test_dmadev_api.c */
 extern int test_dmadev_api(uint16_t dev_id);

+#define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
+
+static inline int
+__rte_format_printf(3, 4)
+print_err(const char *func, int lineno, const char *format, ...)
+{
+	va_list ap;
+	int ret;
+
+	ret = fprintf(stderr, "In %s:%d - ", func, lineno);
+	va_start(ap, format);
+	ret += vfprintf(stderr, format, ap);
+	va_end(ap);
+
+	return ret;
+}
+
+static int
+test_dmadev_instance(uint16_t dev_id)
+{
+#define TEST_RINGSIZE 512
+	/* Setup of the dmadev device. 8< */
+	struct rte_dmadev_info info;
+	const struct rte_dmadev_conf conf = { .nb_vchans = 1};
+	const struct rte_dmadev_vchan_conf qconf = {
+			.direction = RTE_DMA_DIR_MEM_TO_MEM,
+			.nb_desc = TEST_RINGSIZE,
+	};
+	const int vchan = 0;
+
+	printf("\n### Test dmadev instance %u\n", dev_id);
+
+	rte_dmadev_info_get(dev_id, &info);
+	if (info.max_vchans < 1) {
+		PRINT_ERR("Error, no channels available on device id %u\n", dev_id);
+		return -1;
+	}
+	if (rte_dmadev_configure(dev_id, &conf) != 0) {
+		PRINT_ERR("Error with rte_dmadev_configure()\n");
+		return -1;
+	}
+	if (rte_dmadev_vchan_setup(dev_id, vchan, &qconf) < 0) {
+		PRINT_ERR("Error with queue configuration\n");
+		return -1;
+	}
+	/* >8 End of setup of the dmadev device. */
+	rte_dmadev_info_get(dev_id, &info);
+	if (info.nb_vchans != 1) {
+		PRINT_ERR("Error, no configured queues reported on device id %u\n", dev_id);
+		return -1;
+	}
+
+	if (rte_dmadev_start(dev_id) != 0) {
+		PRINT_ERR("Error with rte_rawdev_start()\n");
+		return -1;
+	}
+
+	rte_dmadev_stop(dev_id);
+	return 0;
+}
+
 static int
 test_apis(void)
 {
@@ -34,10 +95,19 @@ test_apis(void)
 static int
 test_dmadev(void)
 {
+	int i;
+
 	/* basic sanity on dmadev infrastructure */
 	if (test_apis() < 0)
 		return -1;

+	if (rte_dmadev_count() == 0)
+		return TEST_SKIPPED;
+
+	for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++)
+		if (rte_dmadevices[i].state == RTE_DMADEV_ATTACHED && test_dmadev_instance(i) < 0)
+			return -1;
+
 	return 0;
 }

--
2.30.2



* [dpdk-dev] [RFC PATCH 4/7] app/test: add basic dmadev copy tests
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
                   ` (2 preceding siblings ...)
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 3/7] app/test: add basic dmadev instance tests Bruce Richardson
@ 2021-08-26 18:32 ` Bruce Richardson
  2021-08-27  7:14   ` Jerin Jacob
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 5/7] app/test: add more comprehensive " Bruce Richardson
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-08-26 18:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, Chengwen Feng, jerinj, Bruce Richardson

For each dmadev instance, perform some basic copy tests to validate that
functionality.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 157 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 157 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index f895556d29..a9f7d34a94 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -1,11 +1,14 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2021 HiSilicon Limited.
  */
+#include <unistd.h>
 
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_dmadev.h>
 #include <rte_bus_vdev.h>
+#include <rte_mbuf.h>
+#include <rte_random.h>
 
 #include "test.h"
 
@@ -14,6 +17,11 @@ extern int test_dmadev_api(uint16_t dev_id);
 
 #define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
 
+#define COPY_LEN 1024
+
+static struct rte_mempool *pool;
+static uint16_t id_count;
+
 static inline int
 __rte_format_printf(3, 4)
 print_err(const char *func, int lineno, const char *format, ...)
@@ -29,10 +37,123 @@ print_err(const char *func, int lineno, const char *format, ...)
 	return ret;
 }
 
+static int
+test_enqueue_copies(int dev_id, uint16_t vchan)
+{
+	unsigned int i;
+	uint16_t id;
+
+	/* test doing a single copy */
+	do {
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		id = rte_dmadev_copy(dev_id, vchan, src->buf_iova + src->data_off,
+				dst->buf_iova + dst->data_off, COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT);
+		if (id != id_count) {
+			PRINT_ERR("Error with rte_dmadev_copy, got %u, expected %u\n",
+					id, id_count);
+			return -1;
+		}
+
+		/* give time for copy to finish, then check it was done */
+		usleep(10);
+
+		for (i = 0; i < COPY_LEN; i++) {
+			if (dst_data[i] != src_data[i]) {
+				PRINT_ERR("Data mismatch at char %u [Got %02x not %02x]\n", i,
+						dst_data[i], src_data[i]);
+				rte_dmadev_dump(dev_id, stderr);
+				return -1;
+			}
+		}
+
+		/* now check completion works */
+		if (rte_dmadev_completed(dev_id, vchan, 1, &id, NULL) != 1) {
+			PRINT_ERR("Error with rte_dmadev_completed\n");
+			return -1;
+		}
+		if (id != id_count) {
+			PRINT_ERR("Error:incorrect job id received, %u [expected %u]\n",
+					id, id_count);
+			return -1;
+		}
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+
+		/* now check completion works */
+		if (rte_dmadev_completed(dev_id, 0, 1, NULL, NULL) != 0) {
+			PRINT_ERR("Error with rte_dmadev_completed in empty check\n");
+			return -1;
+		}
+		id_count++;
+
+	} while (0);
+
+	/* test doing a multiple single copies */
+	do {
+		const uint16_t max_ops = 4;
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		/* perform the same copy <max_ops> times */
+		for (i = 0; i < max_ops; i++) {
+			if (rte_dmadev_copy(dev_id, vchan,
+					src->buf_iova + src->data_off,
+					dst->buf_iova + dst->data_off,
+					COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT) != id_count++) {
+				PRINT_ERR("Error with rte_dmadev_copy\n");
+				return -1;
+			}
+		}
+		usleep(10);
+
+		if ((i = rte_dmadev_completed(dev_id, vchan, max_ops * 2, &id, NULL)) != max_ops) {
+			PRINT_ERR("Error with rte_dmadev_completed, got %u not %u\n", i, max_ops);
+			return -1;
+		}
+		if (id != id_count - 1) {
+			PRINT_ERR("Error, incorrect job id returned: got %u not %u\n",
+					id, id_count - 1);
+			return -1;
+		}
+		for (i = 0; i < COPY_LEN; i++) {
+			if (dst_data[i] != src_data[i]) {
+				PRINT_ERR("Data mismatch at char %u\n", i);
+				return -1;
+			}
+		}
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+	} while (0);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
 #define TEST_RINGSIZE 512
+	struct rte_dmadev_stats stats;
+	int i;
+
 	/* Setup of the dmadev device. 8< */
 	struct rte_dmadev_info info;
 	const struct rte_dmadev_conf conf = { .nb_vchans = 1};
@@ -68,9 +189,45 @@ test_dmadev_instance(uint16_t dev_id)
 		PRINT_ERR("Error with rte_rawdev_start()\n");
 		return -1;
 	}
+	id_count = 0;
 
+	/* create a mempool for running tests */
+	pool = rte_pktmbuf_pool_create("TEST_DMADEV_POOL",
+			TEST_RINGSIZE * 2, /* n == num elements */
+			32,  /* cache size */
+			0,   /* priv size */
+			2048, /* data room size */
+			info.device->numa_node);
+	if (pool == NULL) {
+		PRINT_ERR("Error with mempool creation\n");
+		return -1;
+	}
+
+	/* run the test cases, use many iterations to ensure UINT16_MAX id wraparound */
+	printf("DMA Dev: %u, Running Copy Tests\n", dev_id);
+	for (i = 0; i < 640; i++) {
+
+		if (test_enqueue_copies(dev_id, vchan) != 0) {
+			printf("Error with iteration %d\n", i);
+			rte_dmadev_dump(dev_id, stdout);
+			goto err;
+		}
+
+		rte_dmadev_stats_get(dev_id, 0, &stats);
+		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
+		printf("Ops completed: %"PRIu64"\r", stats.completed);
+	}
+	printf("\n");
+
+
+	rte_mempool_free(pool);
 	rte_dmadev_stop(dev_id);
 	return 0;
+
+err:
+	rte_mempool_free(pool);
+	rte_dmadev_stop(dev_id);
+	return -1;
 }
 
 static int
-- 
2.30.2



* [dpdk-dev] [RFC PATCH 5/7] app/test: add more comprehensive dmadev copy tests
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
                   ` (3 preceding siblings ...)
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 4/7] app/test: add basic dmadev copy tests Bruce Richardson
@ 2021-08-26 18:32 ` Bruce Richardson
  2021-08-26 18:33 ` [dpdk-dev] [RFC PATCH 6/7] app/test: test dmadev instance failure handling Bruce Richardson
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-08-26 18:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, Chengwen Feng, jerinj, Bruce Richardson

Add unit tests for various combinations of use for dmadev, copying
bursts of packets in various formats, e.g.

1. enqueuing two smaller bursts and completing them as one burst
2. enqueuing one burst and gathering completions in smaller bursts
3. using the completed_status() function to gather completions rather than
   just completed()

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 123 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 122 insertions(+), 1 deletion(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index a9f7d34a94..f3ebac2812 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -37,6 +37,120 @@ print_err(const char *func, int lineno, const char *format, ...)
 	return ret;
 }
 
+/* run a series of copy tests just using some different options for enqueues and completions */
+static int
+do_multi_copies(int dev_id, uint16_t vchan,
+		int split_batches,     /* submit 2 x 16 or 1 x 32 burst */
+		int split_completions, /* gather 2 x 16 or 1 x 32 completions */
+		int use_completed_status) /* use completed or completed_status function */
+{
+	struct rte_mbuf *srcs[32], *dsts[32];
+	enum rte_dma_status_code sc[32];
+	unsigned int i, j;
+	bool dma_err = false;
+
+	/* Enqueue burst of copies and hit doorbell. 8< */
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		uint64_t *src_data;
+
+		if (split_batches && i == RTE_DIM(srcs) / 2)
+			rte_dmadev_submit(dev_id, vchan);
+
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(srcs[i], uint64_t *);
+		if (srcs[i] == NULL || dsts[i] == NULL) {
+			PRINT_ERR("Error allocating buffers\n");
+			return -1;
+		}
+
+		for (j = 0; j < COPY_LEN/sizeof(uint64_t); j++)
+			src_data[j] = rte_rand();
+
+		if (rte_dmadev_copy(dev_id, vchan, srcs[i]->buf_iova + srcs[i]->data_off,
+				dsts[i]->buf_iova + dsts[i]->data_off, COPY_LEN, 0) != id_count++) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", i);
+			return -1;
+		}
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	/* >8 End of enqueue burst of copies and hit doorbell. */
+	usleep(20);
+
+	if (split_completions) {
+		/* gather completions in two halves */
+		uint16_t half_len = RTE_DIM(srcs) / 2;
+		int ret = rte_dmadev_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err) {
+			PRINT_ERR("Error with rte_dmadev_completed - first half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+			rte_dmadev_dump(dev_id, stdout);
+			return -1;
+		}
+		ret = rte_dmadev_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err) {
+			PRINT_ERR("Error with rte_dmadev_completed - second half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+			rte_dmadev_dump(dev_id, stdout);
+			return -1;
+		}
+	} else {
+		/* gather all completions in one go, using either
+		 * completed or completed_status fns
+		 */
+		if (!use_completed_status) {
+			int n = rte_dmadev_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+			if (n != RTE_DIM(srcs) || dma_err) {
+				PRINT_ERR("Error with rte_dmadev_completed, %u [expected: %zu], dma_err = %d\n",
+						n, RTE_DIM(srcs), dma_err);
+				rte_dmadev_dump(dev_id, stdout);
+				return -1;
+			}
+		} else {
+			int n = rte_dmadev_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc);
+			if (n != RTE_DIM(srcs)) {
+				PRINT_ERR("Error with rte_dmadev_completed_status, %u [expected: %zu]\n",
+						n, RTE_DIM(srcs));
+				rte_dmadev_dump(dev_id, stdout);
+				return -1;
+			}
+			for (j = 0; j < (uint16_t)n; j++) {
+				if (sc[j] != RTE_DMA_STATUS_SUCCESSFUL) {
+					PRINT_ERR("Error with rte_dmadev_completed_status, job %u reports failure [code %u]\n",
+							j, sc[j]);
+					rte_dmadev_dump(dev_id, stdout);
+					return -1;
+				}
+			}
+		}
+	}
+
+	/* check for empty */
+	int ret = use_completed_status ?
+			rte_dmadev_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc) :
+			rte_dmadev_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+	if (ret != 0) {
+		PRINT_ERR("Error with completion check - ops unexpectedly returned\n");
+		rte_dmadev_dump(dev_id, stdout);
+		return -1;
+	}
+
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		char *src_data, *dst_data;
+
+		src_data = rte_pktmbuf_mtod(srcs[i], char *);
+		dst_data = rte_pktmbuf_mtod(dsts[i], char *);
+		for (j = 0; j < COPY_LEN; j++)
+			if (src_data[j] != dst_data[j]) {
+				PRINT_ERR("Error with copy of packet %u, byte %u\n", i, j);
+				return -1;
+			}
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
 static int
 test_enqueue_copies(int dev_id, uint16_t vchan)
 {
@@ -144,7 +258,14 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 		rte_pktmbuf_free(dst);
 	} while (0);
 
-	return 0;
+	/* test doing multiple copies */
+	return do_multi_copies(dev_id, vchan, 0, 0, 0) /* enqueue and complete 1 batch at a time */
+			/* enqueue 2 batches and then complete both */
+			|| do_multi_copies(dev_id, vchan, 1, 0, 0)
+			/* enqueue 1 batch, then complete in two halves */
+			|| do_multi_copies(dev_id, vchan, 0, 1, 0)
+			/* test using completed_status in place of regular completed API */
+			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
 static int
-- 
2.30.2



* [dpdk-dev] [RFC PATCH 6/7] app/test: test dmadev instance failure handling
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
                   ` (4 preceding siblings ...)
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 5/7] app/test: add more comprehensive " Bruce Richardson
@ 2021-08-26 18:33 ` Bruce Richardson
  2021-08-26 18:33 ` [dpdk-dev] [RFC PATCH 7/7] app/test: add dmadev fill tests Bruce Richardson
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-08-26 18:33 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, Chengwen Feng, jerinj, Bruce Richardson

Add a series of tests to inject bad copy operations into a dmadev to
test the error handling and reporting capabilities. Various combinations
of errors in various positions in a burst are tested, as are errors in
bursts with the fence flag set, and multiple errors in a single burst.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 395 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 395 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index f3ebac2812..9b34632cbc 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -268,6 +268,387 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
+/* Failure handling test cases - global macros and variables for those tests*/
+#define COMP_BURST_SZ	16
+#define OPT_FENCE(idx) ((fence && idx == 8) ? RTE_DMA_OP_FLAG_FENCE : 0)
+
+static int
+test_failure_in_full_burst(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test single full batch statuses with failures */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count;
+	unsigned int j;
+	bool error = 0;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dmadev_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, OPT_FENCE(j));
+		if (id < 0) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
+			return -1;
+		}
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	usleep(10);
+
+	count = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx) {
+		PRINT_ERR("Error with rte_dmadev_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+		rte_dmadev_dump(dev_id, stdout);
+		return -1;
+	}
+	if (error == false) {
+		PRINT_ERR("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+		return -1;
+	}
+	if (idx != invalid_addr_id - 1) {
+		PRINT_ERR("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+		return -1;
+	}
+
+	/* all checks ok, now verify calling completed() again always returns 0 */
+	for (j = 0; j < 10; j++) {
+		if (rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error) != 0
+				|| error == false || idx != (invalid_addr_id - 1)) {
+			PRINT_ERR("Error with follow-up completed calls for fail idx %u\n",
+					fail_idx);
+			return -1;
+		}
+	}
+
+	status_count = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ,
+			&idx, status);
+	/* some HW may stop on error and be restarted after getting error status for single value
+	 * To handle this case, if we get just one error back, wait for more completions and get
+	 * status for rest of the burst
+	 */
+	if (status_count == 1) {
+		usleep(10);
+		status_count += rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ - 1,
+					&idx, &status[1]);
+	}
+	/* check that at this point we have all status values */
+	if (status_count != COMP_BURST_SZ - count) {
+		PRINT_ERR("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+		return -1;
+	}
+	/* now verify just one failure followed by multiple successful or skipped entries */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL) {
+		PRINT_ERR("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+		return -1;
+	}
+	for (j = 1; j < status_count; j++) {
+		/* after a failure in a burst, depending on ordering/fencing,
+		 * operations may be successful or skipped because of previous error.
+		 */
+		if (status[j] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[j] != RTE_DMA_STATUS_NOT_ATTEMPTED) {
+			PRINT_ERR("Error with status calls for fail idx %u. Status for job %u (of %u) is not successful\n",
+					fail_idx, count + j, COMP_BURST_SZ);
+			return -1;
+		}
+	}
+	return 0;
+}
+
+static int
+test_individual_status_query_with_failure(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test gathering batch statuses one at a time */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count = 0, status_count = 0;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dmadev_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, OPT_FENCE(j));
+		if (id < 0) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
+			return -1;
+		}
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	usleep(10);
+
+	/* use regular "completed" until we hit error */
+	while (!error) {
+		uint16_t n = rte_dmadev_completed(dev_id, vchan, 1, &idx, &error);
+		count += n;
+		if (n > 1 || count >= COMP_BURST_SZ) {
+			PRINT_ERR("Error - too many completions got\n");
+			return -1;
+		}
+		if (n == 0 && !error) {
+			PRINT_ERR("Error, unexpectedly got zero completions after %u completed\n",
+					count);
+			return -1;
+		}
+	}
+	if (idx != invalid_addr_id - 1) {
+		PRINT_ERR("Error, last successful index not as expected, got %u, expected %u\n",
+				idx, invalid_addr_id - 1);
+		return -1;
+	}
+
+	/* use completed_status until we hit end of burst */
+	while (count + status_count < COMP_BURST_SZ) {
+		uint16_t n = rte_dmadev_completed_status(dev_id, vchan, 1, &idx,
+				&status[status_count]);
+		usleep(10); /* allow delay to ensure jobs are completed */
+		status_count += n;
+		if (n != 1) {
+			PRINT_ERR("Error: unexpected number of completions received, %u, not 1\n",
+					n);
+			return -1;
+		}
+	}
+
+	/* check for single failure */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL) {
+		PRINT_ERR("Error, unexpected successful DMA transaction\n");
+		return -1;
+	}
+	for (j = 1; j < status_count; j++) {
+		if (status[j] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[j] != RTE_DMA_STATUS_NOT_ATTEMPTED) {
+			PRINT_ERR("Error, unexpected DMA error reported\n");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+test_single_item_status_query_with_failure(int dev_id, uint16_t vchan,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* When error occurs just collect a single error using "completed_status()"
+	 * before going to back to completed() calls
+	 */
+	enum rte_dma_status_code status;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count, count2;
+	unsigned int j;
+	bool error = 0;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dmadev_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
+			return -1;
+		}
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	usleep(10);
+
+	/* get up to the error point */
+	count = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx) {
+		PRINT_ERR("Error with rte_dmadev_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+		rte_dmadev_dump(dev_id, stdout);
+		return -1;
+	}
+	if (error == false) {
+		PRINT_ERR("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+		return -1;
+	}
+	if (idx != invalid_addr_id - 1) {
+		PRINT_ERR("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+		return -1;
+	}
+
+	/* get the error code */
+	status_count = rte_dmadev_completed_status(dev_id, vchan, 1, &idx, &status);
+	if (status_count != 1) {
+		PRINT_ERR("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+		return -1;
+	}
+	if (status == RTE_DMA_STATUS_SUCCESSFUL) {
+		PRINT_ERR("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+		return -1;
+	}
+	usleep(10); /* delay in case more time needed after error handled to complete other jobs */
+
+	/* get the rest of the completions without status */
+	count2 = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (error == true) {
+		PRINT_ERR("Error, got further errors post completed_status() call, for failure case %u.\n",
+				fail_idx);
+		return -1;
+	}
+	if (count + status_count + count2 != COMP_BURST_SZ) {
+		PRINT_ERR("Error, incorrect number of completions received, got %u not %u\n",
+				count + status_count + count2, COMP_BURST_SZ);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+test_multi_failure(int dev_id, uint16_t vchan, struct rte_mbuf **srcs, struct rte_mbuf **dsts,
+		const unsigned int *fail, size_t num_fail)
+{
+	/* test having multiple errors in one go */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	unsigned int i, j;
+	uint16_t count, err_count = 0;
+	bool error = 0;
+
+	/* enqueue and gather completions in one go */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere is the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dmadev_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
+			return -1;
+		}
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	usleep(10);
+
+	count = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ, NULL, status);
+	while (count < COMP_BURST_SZ) {
+		usleep(10);
+
+		uint16_t ret = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ - count,
+				NULL, &status[count]);
+		if (ret == 0) {
+			PRINT_ERR("Error getting all completions for jobs. Got %u of %u\n",
+					count, COMP_BURST_SZ);
+			return -1;
+		}
+		count += ret;
+	}
+	for (i = 0; i < count; i++) {
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
+			err_count++;
+	}
+	if (err_count != num_fail) {
+		PRINT_ERR("Error: Invalid number of failed completions returned, %u; expected %zu\n",
+			err_count, num_fail);
+		return -1;
+	}
+
+	/* enqueue and gather completions in bursts, but getting errors one at a time */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere is the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dmadev_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
+			return -1;
+		}
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	usleep(10);
+
+	count = 0;
+	err_count = 0;
+	while (count + err_count < COMP_BURST_SZ) {
+		count += rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, NULL, &error);
+		if (error) {
+			uint16_t ret = rte_dmadev_completed_status(dev_id, vchan, 1,
+					NULL, status);
+			if (ret != 1) {
+				PRINT_ERR("Error getting error-status for completions\n");
+				return -1;
+			}
+			err_count += ret;
+			usleep(10);
+		}
+	}
+	if (err_count != num_fail) {
+		PRINT_ERR("Error: Incorrect number of failed completions received, got %u not %lu\n",
+				err_count, num_fail);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+test_completion_status(int dev_id, uint16_t vchan, bool fence)
+{
+	const unsigned int fail[] = {0, 7, 14, 15};
+	struct rte_mbuf *srcs[COMP_BURST_SZ], *dsts[COMP_BURST_SZ];
+	unsigned int i;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+	}
+
+	for (i = 0; i < RTE_DIM(fail); i++) {
+		if (test_failure_in_full_burst(dev_id, vchan, fence, srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		if (test_individual_status_query_with_failure(dev_id, vchan, fence,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		/* test is run the same fenced, or unfenced, but no harm in running it twice */
+		if (test_single_item_status_query_with_failure(dev_id, vchan,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+	}
+
+	if (test_multi_failure(dev_id, vchan, srcs, dsts, fail, RTE_DIM(fail)) < 0)
+		return -1;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -340,6 +721,20 @@ test_dmadev_instance(uint16_t dev_id)
 	}
 	printf("\n");
 
+	/* to test error handling we can provide null pointers for source or dest in copies. This
+	 * requires VA mode in DPDK, since NULL(0) is a valid physical address.
+	 */
+	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+		printf("DMA Dev: %u, Running Completion Handling Tests\n", dev_id);
+		if (test_completion_status(dev_id, vchan, false) != 0) /* without fences */
+			goto err;
+		if (test_completion_status(dev_id, vchan, true) != 0) /* with fences */
+			goto err;
+		rte_dmadev_stats_get(dev_id, 0, &stats);
+		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
+		printf("Ops completed: %"PRIu64"\n", stats.completed);
+	}
+
 
 	rte_mempool_free(pool);
 	rte_dmadev_stop(dev_id);
-- 
2.30.2



* [dpdk-dev] [RFC PATCH 7/7] app/test: add dmadev fill tests
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
                   ` (5 preceding siblings ...)
  2021-08-26 18:33 ` [dpdk-dev] [RFC PATCH 6/7] app/test: test dmadev instance failure handling Bruce Richardson
@ 2021-08-26 18:33 ` Bruce Richardson
  2021-09-01 16:32 ` [dpdk-dev] [PATCH v2 0/6] add test suite for DMA drivers Bruce Richardson
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-08-26 18:33 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, Chengwen Feng, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

For DMA devices which support the fill operation, run unit tests to
verify that the fill behaviour is correct.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 68 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 9b34632cbc..cc04689adb 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -649,6 +649,62 @@ test_completion_status(int dev_id, uint16_t vchan, bool fence)
 	return 0;
 }
 
+static int
+test_enqueue_fill(int dev_id, uint16_t vchan)
+{
+	const unsigned int lengths[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst;
+	char *dst_data;
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	dst = rte_pktmbuf_alloc(pool);
+	if (dst == NULL) {
+		PRINT_ERR("Failed to allocate mbuf\n");
+		return -1;
+	}
+	dst_data = rte_pktmbuf_mtod(dst, char *);
+
+	for (i = 0; i < RTE_DIM(lengths); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, rte_pktmbuf_data_len(dst));
+
+		/* perform the fill operation */
+		int id = rte_dmadev_fill(dev_id, vchan, pattern,
+				rte_pktmbuf_iova(dst), lengths[i], RTE_DMA_OP_FLAG_SUBMIT);
+		if (id < 0) {
+			PRINT_ERR("Error with rte_ioat_enqueue_fill\n");
+			return -1;
+		}
+		usleep(10);
+
+		if (rte_dmadev_completed(dev_id, vchan, 1, NULL, NULL) != 1) {
+			PRINT_ERR("Error: fill operation failed (length: %u)\n", lengths[i]);
+			return -1;
+		}
+		/* check the data from the fill operation is correct */
+		for (j = 0; j < lengths[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte) {
+				PRINT_ERR("Error with fill operation (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], pat_byte);
+				return -1;
+			}
+		}
+		/* check that the data after the fill operation was not written to */
+		for (; j < rte_pktmbuf_data_len(dst); j++) {
+			if (dst_data[j] != 0) {
+				PRINT_ERR("Error, fill operation wrote too far (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], 0);
+				return -1;
+			}
+		}
+	}
+
+	rte_pktmbuf_free(dst);
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -735,6 +791,18 @@ test_dmadev_instance(uint16_t dev_id)
 		printf("Ops completed: %"PRIu64"\n", stats.completed);
 	}
 
+	if ((info.dev_capa & RTE_DMADEV_CAPA_OPS_FILL) == 0)
+		printf("DMA Dev: %u, No device fill support - skipping fill tests\n", dev_id);
+	else {
+		printf("DMA Dev: %u, Running Fill Tests\n", dev_id);
+
+		if (test_enqueue_fill(dev_id, vchan) != 0)
+			goto err;
+
+		rte_dmadev_stats_get(dev_id, 0, &stats);
+		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
+		printf("Ops completed: %"PRIu64"\n", stats.completed);
+	}
 
 	rte_mempool_free(pool);
 	rte_dmadev_stop(dev_id);
-- 
2.30.2



* Re: [dpdk-dev] [RFC PATCH 4/7] app/test: add basic dmadev copy tests
  2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 4/7] app/test: add basic dmadev copy tests Bruce Richardson
@ 2021-08-27  7:14   ` Jerin Jacob
  2021-08-27 10:41     ` Bruce Richardson
  0 siblings, 1 reply; 130+ messages in thread
From: Jerin Jacob @ 2021-08-27  7:14 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, Chengwen Feng, Jerin Jacob

On Fri, Aug 27, 2021 at 12:03 AM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> For each dmadev instance, perform some basic copy tests to validate that
> functionality.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  app/test/test_dmadev.c | 157 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 157 insertions(+)
>
> diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> index f895556d29..a9f7d34a94 100644
> --- a/app/test/test_dmadev.c
> +++ b/app/test/test_dmadev.c
> @@ -1,11 +1,14 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2021 HiSilicon Limited.
>   */
> +#include <unistd.h>
>
>  #include <rte_common.h>
>  #include <rte_dev.h>
>  #include <rte_dmadev.h>
>  #include <rte_bus_vdev.h>
> +#include <rte_mbuf.h>
> +#include <rte_random.h>
>
>  #include "test.h"
>
> @@ -14,6 +17,11 @@ extern int test_dmadev_api(uint16_t dev_id);
>
>  #define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
>
> +#define COPY_LEN 1024
> +
> +static struct rte_mempool *pool;
> +static uint16_t id_count;
> +
>  static inline int
>  __rte_format_printf(3, 4)
>  print_err(const char *func, int lineno, const char *format, ...)
> @@ -29,10 +37,123 @@ print_err(const char *func, int lineno, const char *format, ...)
>         return ret;
>  }
>
> +static int
> +test_enqueue_copies(int dev_id, uint16_t vchan)
> +{
> +       unsigned int i;
> +       uint16_t id;
> +
> +       /* test doing a single copy */
> +       do {
> +               struct rte_mbuf *src, *dst;
> +               char *src_data, *dst_data;
> +
> +               src = rte_pktmbuf_alloc(pool);
> +               dst = rte_pktmbuf_alloc(pool);
> +               src_data = rte_pktmbuf_mtod(src, char *);
> +               dst_data = rte_pktmbuf_mtod(dst, char *);
> +
> +               for (i = 0; i < COPY_LEN; i++)
> +                       src_data[i] = rte_rand() & 0xFF;
> +
> +               id = rte_dmadev_copy(dev_id, vchan, src->buf_iova + src->data_off,
> +                               dst->buf_iova + dst->data_off, COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT);
> +               if (id != id_count) {
> +                       PRINT_ERR("Error with rte_dmadev_copy, got %u, expected %u\n",
> +                                       id, id_count);
> +                       return -1;
> +               }
> +
> +               /* give time for copy to finish, then check it was done */
> +               usleep(10);

Across the series, we have this pattern. IMHO, it is not portable.
Can we have a helper function, either in common lib code or in test code, to
busy-poll for completion with a timeout? In the test code we could have a much
bigger timeout to accommodate all the devices. That way, if the driver
completes early it can continue to execute, and it makes the tests portable.
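
Something along these lines, for example - the helper name, the use of
rte_cycles for the timeout and the one-second value are purely
illustrative, not an existing API:

#include <stdint.h>
#include <stdbool.h>
#include <rte_cycles.h>
#include <rte_dmadev.h>

/* Sketch only: busy-poll for 'nb_jobs' completions, giving up after
 * roughly one second so a stalled device fails the test rather than
 * hanging it.
 */
static int
await_completions(uint16_t dev_id, uint16_t vchan, uint16_t nb_jobs,
		uint16_t *last_idx, bool *has_error)
{
	const uint64_t deadline = rte_get_timer_cycles() + rte_get_timer_hz();
	uint16_t done = 0;

	*has_error = false;
	while (done < nb_jobs && !*has_error) {
		done += rte_dmadev_completed(dev_id, vchan, nb_jobs - done,
				last_idx, has_error);
		if (rte_get_timer_cycles() > deadline)
			break;	/* timed out - let the caller report it */
	}
	return done;
}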


> +
> +               for (i = 0; i < COPY_LEN; i++) {
> +                       if (dst_data[i] != src_data[i]) {
> +                               PRINT_ERR("Data mismatch at char %u [Got %02x not %02x]\n", i,
> +                                               dst_data[i], src_data[i]);
> +                               rte_dmadev_dump(dev_id, stderr);
> +                               return -1;
> +                       }
> +               }
> +
> +               /* now check completion works */
> +               if (rte_dmadev_completed(dev_id, vchan, 1, &id, NULL) != 1) {
> +                       PRINT_ERR("Error with rte_dmadev_completed\n");
> +                       return -1;
> +               }
> +               if (id != id_count) {
> +                       PRINT_ERR("Error:incorrect job id received, %u [expected %u]\n",
> +                                       id, id_count);
> +                       return -1;
> +               }
> +


* Re: [dpdk-dev] [RFC PATCH 4/7] app/test: add basic dmadev copy tests
  2021-08-27  7:14   ` Jerin Jacob
@ 2021-08-27 10:41     ` Bruce Richardson
  0 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-08-27 10:41 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, Chengwen Feng, Jerin Jacob

On Fri, Aug 27, 2021 at 12:44:17PM +0530, Jerin Jacob wrote:
> On Fri, Aug 27, 2021 at 12:03 AM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > For each dmadev instance, perform some basic copy tests to validate that
> > functionality.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >  app/test/test_dmadev.c | 157 +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 157 insertions(+)
> >
> > diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> > index f895556d29..a9f7d34a94 100644
> > --- a/app/test/test_dmadev.c
> > +++ b/app/test/test_dmadev.c
> > @@ -1,11 +1,14 @@
> >  /* SPDX-License-Identifier: BSD-3-Clause
> >   * Copyright(c) 2021 HiSilicon Limited.
> >   */
> > +#include <unistd.h>
> >
> >  #include <rte_common.h>
> >  #include <rte_dev.h>
> >  #include <rte_dmadev.h>
> >  #include <rte_bus_vdev.h>
> > +#include <rte_mbuf.h>
> > +#include <rte_random.h>
> >
> >  #include "test.h"
> >
> > @@ -14,6 +17,11 @@ extern int test_dmadev_api(uint16_t dev_id);
> >
> >  #define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
> >
> > +#define COPY_LEN 1024
> > +
> > +static struct rte_mempool *pool;
> > +static uint16_t id_count;
> > +
> >  static inline int
> >  __rte_format_printf(3, 4)
> >  print_err(const char *func, int lineno, const char *format, ...)
> > @@ -29,10 +37,123 @@ print_err(const char *func, int lineno, const char *format, ...)
> >         return ret;
> >  }
> >
> > +static int
> > +test_enqueue_copies(int dev_id, uint16_t vchan)
> > +{
> > +       unsigned int i;
> > +       uint16_t id;
> > +
> > +       /* test doing a single copy */
> > +       do {
> > +               struct rte_mbuf *src, *dst;
> > +               char *src_data, *dst_data;
> > +
> > +               src = rte_pktmbuf_alloc(pool);
> > +               dst = rte_pktmbuf_alloc(pool);
> > +               src_data = rte_pktmbuf_mtod(src, char *);
> > +               dst_data = rte_pktmbuf_mtod(dst, char *);
> > +
> > +               for (i = 0; i < COPY_LEN; i++)
> > +                       src_data[i] = rte_rand() & 0xFF;
> > +
> > +               id = rte_dmadev_copy(dev_id, vchan, src->buf_iova + src->data_off,
> > +                               dst->buf_iova + dst->data_off, COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT);
> > +               if (id != id_count) {
> > +                       PRINT_ERR("Error with rte_dmadev_copy, got %u, expected %u\n",
> > +                                       id, id_count);
> > +                       return -1;
> > +               }
> > +
> > +               /* give time for copy to finish, then check it was done */
> > +               usleep(10);
> 
> Across series, We have this pattern. IMHO, It is not portable.
> Can we have a helper function either common lib code or test code to
> busy poll for completion with timeout? and in test code, we have a much bigger
> timeout to accommodate all the devices. That way if the driver completes
> early it can continue to execute and makes it portable.
> 

It's less than ideal, I admit, but I'm not sure it's not portable. The main
concern here for these unit tests is to try and ensure that at all times
the state of the device is fully known, so the delays are there to ensure
that the device has completely finished all work given to it. The
suggestion of having a polling loop to gather completions won't work for
all scenarios, as it doesn't give us the same degree of control: we cannot
know the exact state of job completion when running each poll, which makes
tests such as gathering all completions in one go, or in chunks of a fixed
size, hard to do.

The other alternative is to provide an API in dmadev to await quiescence of
a DMA device. This would be better than having the fixed delays, but means
a new API for testing only. I can investigate this for the next version of
the patchset, though it means more work for driver writers. [This could
perhaps be worked around by having dmadev fall back to a "usleep()" call
for any drivers that don't implement the function, which would give driver
writers a quicker way to test incomplete drivers.]

Also, if having "large" delays is a concern, I don't think it's a major
one. With all delays as usleep(10) - and 10 microseconds should be a long
time for a HW device - a test run of an optimized build of DPDK takes ~1.5
seconds for 2 dmadev instances, and a debug build only ~2.5 seconds.
Increasing the delay tenfold to 100 usec brings the optimized test run to
<2.5 secs and a debug build run to ~4 seconds, i.e. 2 seconds per device.
Therefore I don't believe having delays in this code is a real problem,
beyond being a less elegant solution than a polling-based one.

In any case, I'll see about adding an "all-jobs-done" API to dmadev for v2
to remove these delays as much as possible.
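
As a rough sketch (illustrative only, and essentially what the await_hw()
helper in v2 of this set below ends up doing), such a helper could combine
the proposed idle check with a sleep fallback and a timeout:

/* Sketch only: wait for a vchan to finish all outstanding jobs, falling
 * back to a fixed sleep for drivers that do not implement the idle check.
 * rte_dmadev_vchan_idle() is the API added in v2 of this series; the
 * 25us/1s values are arbitrary.
 */
static void
await_device_quiescence(int dev_id, uint16_t vchan)
{
	int idle = rte_dmadev_vchan_idle(dev_id, vchan);

	if (idle < 0) {
		/* op not supported by this driver - use a fixed delay */
		usleep(25);
		return;
	}

	/* busy-poll, with a one-second upper bound as a safety net */
	const uint64_t end_cycles = rte_get_timer_cycles() + rte_get_timer_hz();
	while (!idle && rte_get_timer_cycles() < end_cycles) {
		rte_pause();
		idle = rte_dmadev_vchan_idle(dev_id, vchan);
	}
}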

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v2 0/6] add test suite for DMA drivers
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
                   ` (6 preceding siblings ...)
  2021-08-26 18:33 ` [dpdk-dev] [RFC PATCH 7/7] app/test: add dmadev fill tests Bruce Richardson
@ 2021-09-01 16:32 ` Bruce Richardson
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 1/6] dmadev: add device idle check for testing use Bruce Richardson
                     ` (5 more replies)
  2021-09-07 16:49 ` [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers Bruce Richardson
                   ` (3 subsequent siblings)
  11 siblings, 6 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-01 16:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

This patchset adds a fairly comprehensive set of tests for basic dmadev
functionality. Tests are added to verify basic copy operation in each
device, using both submit function and submit flag, and verifying completion
gathering using both "completed()" and "completed_status()" functions. Beyond
that, tests are then added for the error reporting and handling, as is a suite
of tests for the fill() operation for devices that support those.

NOTE: This patchset depends on series v17 of the dmadev set [1].

[1] http://patches.dpdk.org/project/dpdk/list/?series=18502

V2:
* added into dmadev an API to check for a device being idle
* removed the hard-coded timeout delays before checking completions, and instead
  wait for the device to be idle
* added in checks for statistics updates as part of some tests
* fixed issue identified by internal coverity scan
* other minor miscellaneous changes and fixes.

Bruce Richardson (5):
  dmadev: add device idle check for testing use
  app/test: add basic dmadev instance tests
  app/test: add basic dmadev copy tests
  app/test: add more comprehensive dmadev copy tests
  app/test: test dmadev instance failure handling

Kevin Laatz (1):
  app/test: add dmadev fill tests

 app/test/test_dmadev.c       | 893 +++++++++++++++++++++++++++++++++++
 lib/dmadev/rte_dmadev.c      |  16 +
 lib/dmadev/rte_dmadev.h      |  21 +
 lib/dmadev/rte_dmadev_core.h |   4 +
 lib/dmadev/version.map       |   1 +
 5 files changed, 935 insertions(+)

--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v2 1/6] dmadev: add device idle check for testing use
  2021-09-01 16:32 ` [dpdk-dev] [PATCH v2 0/6] add test suite for DMA drivers Bruce Richardson
@ 2021-09-01 16:32   ` Bruce Richardson
  2021-09-02 12:54     ` fengchengwen
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 2/6] app/test: add basic dmadev instance tests Bruce Richardson
                     ` (4 subsequent siblings)
  5 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-01 16:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add in a function to check if a device or vchan has completed all jobs
assigned to it, without gathering in the results. This is primarily for
use in testing, to allow the hardware to be in a known state prior to
gathering completions.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/dmadev/rte_dmadev.c      | 16 ++++++++++++++++
 lib/dmadev/rte_dmadev.h      | 21 +++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h |  4 ++++
 lib/dmadev/version.map       |  1 +
 4 files changed, 42 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 1c946402db..e249411631 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -555,3 +555,19 @@ rte_dmadev_dump(uint16_t dev_id, FILE *f)
 
 	return 0;
 }
+
+int
+rte_dmadev_vchan_idle(uint16_t dev_id, uint16_t vchan)
+{
+	struct rte_dmadev *dev = &rte_dmadevices[dev_id];
+
+	RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
+	if (vchan >= dev->data->dev_conf.nb_vchans) {
+		RTE_DMADEV_LOG(ERR,
+			"Device %u vchan %u out of range\n", dev_id, vchan);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vchan_idle, -ENOTSUP);
+	return (*dev->dev_ops->vchan_idle)(dev, vchan);
+}
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index e8f58e9213..350e7defc8 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -1028,6 +1028,27 @@ rte_dmadev_completed_status(uint16_t dev_id, uint16_t vchan,
 	return (*dev->completed_status)(dev, vchan, nb_cpls, last_idx, status);
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Determine if all jobs have completed on a device channel.
+ * This function is primarily designed for testing use, as it allows a process to check if
+ * all jobs are completed, without actually gathering completions from those jobs.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ * @return
+ *   1 - if all jobs have completed and the device vchan is idle
+ *   0 - if there are still outstanding jobs yet to complete
+ *   < 0 - error code indicating there was a problem calling the API
+ */
+__rte_experimental
+int
+rte_dmadev_vchan_idle(uint16_t dev_id, uint16_t vchan);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index e94aa1c457..7ec5a5b572 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -53,6 +53,9 @@ typedef int (*rte_dmadev_stats_reset_t)(struct rte_dmadev *dev, uint16_t vchan);
 typedef int (*rte_dmadev_dump_t)(const struct rte_dmadev *dev, FILE *f);
 /**< @internal Used to dump internal information. */
 
+typedef int (*rte_dmadev_vchan_idle_t)(const struct rte_dmadev *dev, uint16_t vchan);
+/**< @internal Used to check if a virtual channel has finished all jobs. */
+
 typedef int (*rte_dmadev_copy_t)(struct rte_dmadev *dev, uint16_t vchan,
 				 rte_iova_t src, rte_iova_t dst,
 				 uint32_t length, uint64_t flags);
@@ -106,6 +109,7 @@ struct rte_dmadev_ops {
 	rte_dmadev_stats_get_t stats_get;
 	rte_dmadev_stats_reset_t stats_reset;
 	rte_dmadev_dump_t dev_dump;
+	rte_dmadev_vchan_idle_t vchan_idle;
 };
 
 /**
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 80be592713..b7e52fda3d 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -18,6 +18,7 @@ EXPERIMENTAL {
 	rte_dmadev_stats_reset;
 	rte_dmadev_stop;
 	rte_dmadev_submit;
+	rte_dmadev_vchan_idle;
 	rte_dmadev_vchan_setup;
 
 	local: *;
-- 
2.30.2
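
On the test-code side, usage of the new call is then just a polling loop;
a minimal sketch (the helper name and timeout handling here are
illustrative only - later patches in this series use a similar helper,
await_hw()) might be:

/* Sketch: poll rte_dmadev_vchan_idle() until idle or a timeout expires.
 * Returns 0 once the vchan is idle, -1 on timeout or if the op is not
 * supported by the driver.
 */
static int
wait_for_idle(uint16_t dev_id, uint16_t vchan, unsigned int timeout_ms)
{
	const uint64_t end_cycles = rte_get_timer_cycles() +
			(rte_get_timer_hz() * timeout_ms) / 1000;

	while (rte_get_timer_cycles() < end_cycles) {
		int ret = rte_dmadev_vchan_idle(dev_id, vchan);
		if (ret < 0)
			return -1; /* op not supported, or bad device/vchan */
		if (ret == 1)
			return 0;  /* all submitted jobs have completed */
		rte_pause();
	}
	return -1; /* timed out with jobs still outstanding */
}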


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v2 2/6] app/test: add basic dmadev instance tests
  2021-09-01 16:32 ` [dpdk-dev] [PATCH v2 0/6] add test suite for DMA drivers Bruce Richardson
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 1/6] dmadev: add device idle check for testing use Bruce Richardson
@ 2021-09-01 16:32   ` Bruce Richardson
  2021-09-01 19:24     ` Mattias Rönnblom
  2021-09-03 16:07     ` Conor Walsh
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests Bruce Richardson
                     ` (3 subsequent siblings)
  5 siblings, 2 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-01 16:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Run basic sanity tests for configuring, starting and stopping a dmadev
instance to help validate drivers. This also provides the framework for
future tests for data-path operation.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 81 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 81 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index bb01e86483..12f7c69629 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -2,6 +2,7 @@
  * Copyright(c) 2021 HiSilicon Limited.
  * Copyright(c) 2021 Intel Corporation.
  */
+#include <inttypes.h>
 
 #include <rte_common.h>
 #include <rte_dev.h>
@@ -13,6 +14,77 @@
 /* from test_dmadev_api.c */
 extern int test_dmadev_api(uint16_t dev_id);
 
+#define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
+
+static inline int
+__rte_format_printf(3, 4)
+print_err(const char *func, int lineno, const char *format, ...)
+{
+	va_list ap;
+	int ret;
+
+	ret = fprintf(stderr, "In %s:%d - ", func, lineno);
+	va_start(ap, format);
+	ret += vfprintf(stderr, format, ap);
+	va_end(ap);
+
+	return ret;
+}
+
+static int
+test_dmadev_instance(uint16_t dev_id)
+{
+#define TEST_RINGSIZE 512
+	struct rte_dmadev_stats stats;
+	struct rte_dmadev_info info;
+	const struct rte_dmadev_conf conf = { .nb_vchans = 1};
+	const struct rte_dmadev_vchan_conf qconf = {
+			.direction = RTE_DMA_DIR_MEM_TO_MEM,
+			.nb_desc = TEST_RINGSIZE,
+	};
+	const int vchan = 0;
+
+	printf("\n### Test dmadev instance %u\n", dev_id);
+
+	rte_dmadev_info_get(dev_id, &info);
+	if (info.max_vchans < 1) {
+		PRINT_ERR("Error, no channels available on device id %u\n", dev_id);
+		return -1;
+	}
+	if (rte_dmadev_configure(dev_id, &conf) != 0) {
+		PRINT_ERR("Error with rte_dmadev_configure()\n");
+		return -1;
+	}
+	if (rte_dmadev_vchan_setup(dev_id, vchan, &qconf) < 0) {
+		PRINT_ERR("Error with queue configuration\n");
+		return -1;
+	}
+
+	rte_dmadev_info_get(dev_id, &info);
+	if (info.nb_vchans != 1) {
+		PRINT_ERR("Error, no configured queues reported on device id %u\n", dev_id);
+		return -1;
+	}
+
+	if (rte_dmadev_start(dev_id) != 0) {
+		PRINT_ERR("Error with rte_dmadev_start()\n");
+		return -1;
+	}
+	if (rte_dmadev_stats_get(dev_id, vchan, &stats) != 0) {
+		PRINT_ERR("Error with rte_dmadev_stats_get()\n");
+		return -1;
+	}
+	if (stats.completed != 0 || stats.submitted != 0 || stats.errors != 0) {
+		PRINT_ERR("Error device stats are not all zero: completed = %"PRIu64", submitted = %"PRIu64", errors = %"PRIu64"\n",
+				stats.completed, stats.submitted, stats.errors);
+		return -1;
+	}
+
+	rte_dmadev_stop(dev_id);
+	rte_dmadev_stats_reset(dev_id, vchan);
+	return 0;
+}
+
 static int
 test_apis(void)
 {
@@ -35,10 +107,19 @@ test_apis(void)
 static int
 test_dmadev(void)
 {
+	int i;
+
 	/* basic sanity on dmadev infrastructure */
 	if (test_apis() < 0)
 		return -1;
 
+	if (rte_dmadev_count() == 0)
+		return TEST_SKIPPED;
+
+	for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++)
+		if (rte_dmadevices[i].state == RTE_DMADEV_ATTACHED && test_dmadev_instance(i) < 0)
+			return -1;
+
 	return 0;
 }
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests
  2021-09-01 16:32 ` [dpdk-dev] [PATCH v2 0/6] add test suite for DMA drivers Bruce Richardson
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 1/6] dmadev: add device idle check for testing use Bruce Richardson
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 2/6] app/test: add basic dmadev instance tests Bruce Richardson
@ 2021-09-01 16:32   ` Bruce Richardson
  2021-09-02  7:44     ` Jerin Jacob
                       ` (2 more replies)
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 4/6] app/test: add more comprehensive " Bruce Richardson
                     ` (2 subsequent siblings)
  5 siblings, 3 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-01 16:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

For each dmadev instance, perform some basic copy tests to validate that
functionality.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 174 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 174 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 12f7c69629..261f45db71 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -2,12 +2,15 @@
  * Copyright(c) 2021 HiSilicon Limited.
  * Copyright(c) 2021 Intel Corporation.
  */
+#include <unistd.h>
 #include <inttypes.h>
 
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_dmadev.h>
 #include <rte_bus_vdev.h>
+#include <rte_mbuf.h>
+#include <rte_random.h>
 
 #include "test.h"
 
@@ -16,6 +19,11 @@ extern int test_dmadev_api(uint16_t dev_id);
 
 #define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
 
+#define COPY_LEN 1024
+
+static struct rte_mempool *pool;
+static uint16_t id_count;
+
 static inline int
 __rte_format_printf(3, 4)
 print_err(const char *func, int lineno, const char *format, ...)
@@ -31,6 +39,134 @@ print_err(const char *func, int lineno, const char *format, ...)
 	return ret;
 }
 
+static inline void
+await_hw(int dev_id, uint16_t vchan)
+{
+	int idle = rte_dmadev_vchan_idle(dev_id, vchan);
+	if (idle < 0) {
+		/* for drivers that don't support this op, just sleep for 25 microseconds */
+		usleep(25);
+		return;
+	}
+
+	/* for those that do, *max* end time is one second from now, but all should be faster */
+	const uint64_t end_cycles = rte_get_timer_cycles() + rte_get_timer_hz();
+	while (!idle && rte_get_timer_cycles() < end_cycles) {
+		rte_pause();
+		idle = rte_dmadev_vchan_idle(dev_id, vchan);
+	}
+}
+
+static int
+test_enqueue_copies(int dev_id, uint16_t vchan)
+{
+	unsigned int i;
+	uint16_t id;
+
+	/* test doing a single copy */
+	do {
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		id = rte_dmadev_copy(dev_id, vchan, src->buf_iova + src->data_off,
+				dst->buf_iova + dst->data_off, COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT);
+		if (id != id_count) {
+			PRINT_ERR("Error with rte_dmadev_copy, got %u, expected %u\n",
+					id, id_count);
+			return -1;
+		}
+
+		/* give time for copy to finish, then check it was done */
+		await_hw(dev_id, vchan);
+
+		for (i = 0; i < COPY_LEN; i++) {
+			if (dst_data[i] != src_data[i]) {
+				PRINT_ERR("Data mismatch at char %u [Got %02x not %02x]\n", i,
+						dst_data[i], src_data[i]);
+				rte_dmadev_dump(dev_id, stderr);
+				return -1;
+			}
+		}
+
+		/* now check completion works */
+		if (rte_dmadev_completed(dev_id, vchan, 1, &id, NULL) != 1) {
+			PRINT_ERR("Error with rte_dmadev_completed\n");
+			return -1;
+		}
+		if (id != id_count) {
+			PRINT_ERR("Error:incorrect job id received, %u [expected %u]\n",
+					id, id_count);
+			return -1;
+		}
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+
+		/* now check completion works */
+		if (rte_dmadev_completed(dev_id, 0, 1, NULL, NULL) != 0) {
+			PRINT_ERR("Error with rte_dmadev_completed in empty check\n");
+			return -1;
+		}
+		id_count++;
+
+	} while (0);
+
+	/* test doing multiple single copies */
+	do {
+		const uint16_t max_ops = 4;
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		/* perform the same copy <max_ops> times */
+		for (i = 0; i < max_ops; i++) {
+			if (rte_dmadev_copy(dev_id, vchan,
+					src->buf_iova + src->data_off,
+					dst->buf_iova + dst->data_off,
+					COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT) != id_count++) {
+				PRINT_ERR("Error with rte_dmadev_copy\n");
+				return -1;
+			}
+		}
+		await_hw(dev_id, vchan);
+
+		if ((i = rte_dmadev_completed(dev_id, vchan, max_ops * 2, &id, NULL)) != max_ops) {
+			PRINT_ERR("Error with rte_dmadev_completed, got %u not %u\n", i, max_ops);
+			return -1;
+		}
+		if (id != id_count - 1) {
+			PRINT_ERR("Error, incorrect job id returned: got %u not %u\n",
+					id, id_count - 1);
+			return -1;
+		}
+		for (i = 0; i < COPY_LEN; i++) {
+			if (dst_data[i] != src_data[i]) {
+				PRINT_ERR("Data mismatch at char %u\n", i);
+				return -1;
+			}
+		}
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+	} while (0);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -43,6 +179,7 @@ test_dmadev_instance(uint16_t dev_id)
 			.nb_desc = TEST_RINGSIZE,
 	};
 	const int vchan = 0;
+	int i;
 
 	printf("\n### Test dmadev instance %u\n", dev_id);
 
@@ -79,10 +216,47 @@ test_dmadev_instance(uint16_t dev_id)
 				stats.completed, stats.submitted, stats.errors);
 		return -1;
 	}
+	id_count = 0;
+
+	/* create a mempool for running tests */
+	pool = rte_pktmbuf_pool_create("TEST_DMADEV_POOL",
+			TEST_RINGSIZE * 2, /* n == num elements */
+			32,  /* cache size */
+			0,   /* priv size */
+			2048, /* data room size */
+			info.device->numa_node);
+	if (pool == NULL) {
+		PRINT_ERR("Error with mempool creation\n");
+		return -1;
+	}
+
+	/* run the test cases, use many iterations to ensure UINT16_MAX id wraparound */
+	printf("DMA Dev: %u, Running Copy Tests\n", dev_id);
+	for (i = 0; i < 640; i++) {
+
+		if (test_enqueue_copies(dev_id, vchan) != 0) {
+			printf("Error with iteration %d\n", i);
+			rte_dmadev_dump(dev_id, stdout);
+			goto err;
+		}
 
+		rte_dmadev_stats_get(dev_id, 0, &stats);
+		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
+		printf("Ops completed: %"PRIu64"\t", stats.completed);
+		printf("Errors: %"PRIu64"\r", stats.errors);
+	}
+	printf("\n");
+
+
+	rte_mempool_free(pool);
 	rte_dmadev_stop(dev_id);
 	rte_dmadev_stats_reset(dev_id, vchan);
 	return 0;
+
+err:
+	rte_mempool_free(pool);
+	rte_dmadev_stop(dev_id);
+	return -1;
 }
 
 static int
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v2 4/6] app/test: add more comprehensive dmadev copy tests
  2021-09-01 16:32 ` [dpdk-dev] [PATCH v2 0/6] add test suite for DMA drivers Bruce Richardson
                     ` (2 preceding siblings ...)
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests Bruce Richardson
@ 2021-09-01 16:32   ` Bruce Richardson
  2021-09-03 16:08     ` Conor Walsh
  2021-09-03 16:11     ` Kevin Laatz
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 5/6] app/test: test dmadev instance failure handling Bruce Richardson
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 6/6] app/test: add dmadev fill tests Bruce Richardson
  5 siblings, 2 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-01 16:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add unit tests for various combinations of use for dmadev, copying
bursts of packets in various formats, e.g.

1. enqueuing two smaller bursts and completing them as one burst
2. enqueuing one burst and gathering completions in smaller bursts
3. using completed_status() function to gather completions rather than
   just completed()

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 142 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 140 insertions(+), 2 deletions(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 261f45db71..7a808a9cba 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -39,6 +39,20 @@ print_err(const char *func, int lineno, const char *format, ...)
 	return ret;
 }
 
+static inline int
+check_stats(struct rte_dmadev_stats *stats, bool check_errors)
+{
+	if (stats->completed != stats->submitted) {
+		PRINT_ERR("Error, not all submitted jobs are reported as completed\n");
+		return -1;
+	}
+	if (check_errors && stats->errors != 0) {
+		PRINT_ERR("Errors reported during copy processing, aborting tests\n");
+		return -1;
+	}
+	return 0;
+}
+
 static inline void
 await_hw(int dev_id, uint16_t vchan)
 {
@@ -57,6 +71,120 @@ await_hw(int dev_id, uint16_t vchan)
 	}
 }
 
+/* run a series of copy tests just using some different options for enqueues and completions */
+static int
+do_multi_copies(int dev_id, uint16_t vchan,
+		int split_batches,     /* submit 2 x 16 or 1 x 32 burst */
+		int split_completions, /* gather 2 x 16 or 1 x 32 completions */
+		int use_completed_status) /* use completed or completed_status function */
+{
+	struct rte_mbuf *srcs[32], *dsts[32];
+	enum rte_dma_status_code sc[32];
+	unsigned int i, j;
+	bool dma_err = false;
+
+	/* Enqueue burst of copies and hit doorbell */
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		uint64_t *src_data;
+
+		if (split_batches && i == RTE_DIM(srcs) / 2)
+			rte_dmadev_submit(dev_id, vchan);
+
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+		if (srcs[i] == NULL || dsts[i] == NULL) {
+			PRINT_ERR("Error allocating buffers\n");
+			return -1;
+		}
+		src_data = rte_pktmbuf_mtod(srcs[i], uint64_t *);
+
+		for (j = 0; j < COPY_LEN/sizeof(uint64_t); j++)
+			src_data[j] = rte_rand();
+
+		if (rte_dmadev_copy(dev_id, vchan, srcs[i]->buf_iova + srcs[i]->data_off,
+				dsts[i]->buf_iova + dsts[i]->data_off, COPY_LEN, 0) != id_count++) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", i);
+			return -1;
+		}
+	}
+	rte_dmadev_submit(dev_id, vchan);
+
+	await_hw(dev_id, vchan);
+
+	if (split_completions) {
+		/* gather completions in two halves */
+		uint16_t half_len = RTE_DIM(srcs) / 2;
+		int ret = rte_dmadev_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err) {
+			PRINT_ERR("Error with rte_dmadev_completed - first half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+			rte_dmadev_dump(dev_id, stdout);
+			return -1;
+		}
+		ret = rte_dmadev_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err) {
+			PRINT_ERR("Error with rte_dmadev_completed - second half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+			rte_dmadev_dump(dev_id, stdout);
+			return -1;
+		}
+	} else {
+		/* gather all completions in one go, using either
+		 * completed or completed_status fns
+		 */
+		if (!use_completed_status) {
+			int n = rte_dmadev_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+			if (n != RTE_DIM(srcs) || dma_err) {
+				PRINT_ERR("Error with rte_dmadev_completed, %u [expected: %zu], dma_err = %d\n",
+						n, RTE_DIM(srcs), dma_err);
+				rte_dmadev_dump(dev_id, stdout);
+				return -1;
+			}
+		} else {
+			int n = rte_dmadev_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc);
+			if (n != RTE_DIM(srcs)) {
+				PRINT_ERR("Error with rte_dmadev_completed_status, %u [expected: %zu]\n",
+						n, RTE_DIM(srcs));
+				rte_dmadev_dump(dev_id, stdout);
+				return -1;
+			}
+			for (j = 0; j < (uint16_t)n; j++) {
+				if (sc[j] != RTE_DMA_STATUS_SUCCESSFUL) {
+					PRINT_ERR("Error with rte_dmadev_completed_status, job %u reports failure [code %u]\n",
+							j, sc[j]);
+					rte_dmadev_dump(dev_id, stdout);
+					return -1;
+				}
+			}
+		}
+	}
+
+	/* check for empty */
+	int ret = use_completed_status ?
+			rte_dmadev_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc) :
+			rte_dmadev_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+	if (ret != 0) {
+		PRINT_ERR("Error with completion check - ops unexpectedly returned\n");
+		rte_dmadev_dump(dev_id, stdout);
+		return -1;
+	}
+
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		char *src_data, *dst_data;
+
+		src_data = rte_pktmbuf_mtod(srcs[i], char *);
+		dst_data = rte_pktmbuf_mtod(dsts[i], char *);
+		for (j = 0; j < COPY_LEN; j++)
+			if (src_data[j] != dst_data[j]) {
+				PRINT_ERR("Error with copy of packet %u, byte %u\n", i, j);
+				return -1;
+			}
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
 static int
 test_enqueue_copies(int dev_id, uint16_t vchan)
 {
@@ -164,7 +292,14 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 		rte_pktmbuf_free(dst);
 	} while (0);
 
-	return 0;
+	/* test doing multiple copies */
+	return do_multi_copies(dev_id, vchan, 0, 0, 0) /* enqueue and complete 1 batch at a time */
+			/* enqueue 2 batches and then complete both */
+			|| do_multi_copies(dev_id, vchan, 1, 0, 0)
+			/* enqueue 1 batch, then complete in two halves */
+			|| do_multi_copies(dev_id, vchan, 0, 1, 0)
+			/* test using completed_status in place of regular completed API */
+			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
 static int
@@ -216,6 +351,8 @@ test_dmadev_instance(uint16_t dev_id)
 				stats.completed, stats.submitted, stats.errors);
 		return -1;
 	}
+
+	rte_dmadev_stats_reset(dev_id, vchan);
 	id_count = 0;
 
 	/* create a mempool for running tests */
@@ -246,7 +383,8 @@ test_dmadev_instance(uint16_t dev_id)
 		printf("Errors: %"PRIu64"\r", stats.errors);
 	}
 	printf("\n");
-
+	if (check_stats(&stats, true) < 0)
+		goto err;
 
 	rte_mempool_free(pool);
 	rte_dmadev_stop(dev_id);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v2 5/6] app/test: test dmadev instance failure handling
  2021-09-01 16:32 ` [dpdk-dev] [PATCH v2 0/6] add test suite for DMA drivers Bruce Richardson
                     ` (3 preceding siblings ...)
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 4/6] app/test: add more comprehensive " Bruce Richardson
@ 2021-09-01 16:32   ` Bruce Richardson
  2021-09-01 19:53     ` Mattias Rönnblom
                       ` (2 more replies)
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 6/6] app/test: add dmadev fill tests Bruce Richardson
  5 siblings, 3 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-01 16:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a series of tests to inject bad copy operations into a dmadev to
test the error handling and reporting capabilities. Various combinations
of errors in various positions in a burst are tested, as are errors in
bursts with fence flag set, and multiple errors in a single burst.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 427 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 427 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 7a808a9cba..5d7b6ddd87 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -302,6 +302,414 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
+/* Failure handling test cases - global macros and variables for those tests*/
+#define COMP_BURST_SZ	16
+#define OPT_FENCE(idx) ((fence && idx == 8) ? RTE_DMA_OP_FLAG_FENCE : 0)
+
+static int
+test_failure_in_full_burst(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test single full batch statuses with failures */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	struct rte_dmadev_stats baseline, stats;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count;
+	unsigned int i;
+	bool error = 0;
+	int err_count = 0;
+
+	rte_dmadev_stats_get(dev_id, vchan, &baseline); /* get a baseline set of stats */
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		int id = rte_dmadev_copy(dev_id, vchan,
+				(i == fail_idx ? 0 : (srcs[i]->buf_iova + srcs[i]->data_off)),
+				dsts[i]->buf_iova + dsts[i]->data_off,
+				COPY_LEN, OPT_FENCE(i));
+		if (id < 0) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", i);
+			return -1;
+		}
+		if (i == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	rte_dmadev_stats_get(dev_id, vchan, &stats);
+	if (stats.submitted != baseline.submitted + COMP_BURST_SZ) {
+		PRINT_ERR("Submitted stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.submitted, baseline.submitted + COMP_BURST_SZ);
+		return -1;
+	}
+
+	await_hw(dev_id, vchan);
+
+	count = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx) {
+		PRINT_ERR("Error with rte_dmadev_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+		rte_dmadev_dump(dev_id, stdout);
+		return -1;
+	}
+	if (error == false) {
+		PRINT_ERR("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+		return -1;
+	}
+	if (idx != invalid_addr_id - 1) {
+		PRINT_ERR("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+		return -1;
+	}
+
+	/* all checks ok, now verify calling completed() again always returns 0 */
+	for (i = 0; i < 10; i++) {
+		if (rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error) != 0
+				|| error == false || idx != (invalid_addr_id - 1)) {
+			PRINT_ERR("Error with follow-up completed calls for fail idx %u\n",
+					fail_idx);
+			return -1;
+		}
+	}
+
+	status_count = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ,
+			&idx, status);
+	/* some HW may stop on error and be restarted after getting error status for single value
+	 * To handle this case, if we get just one error back, wait for more completions and get
+	 * status for rest of the burst
+	 */
+	if (status_count == 1) {
+		await_hw(dev_id, vchan);
+		status_count += rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ - 1,
+					&idx, &status[1]);
+	}
+	/* check that at this point we have all status values */
+	if (status_count != COMP_BURST_SZ - count) {
+		PRINT_ERR("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+		return -1;
+	}
+	/* now verify just one failure followed by multiple successful or skipped entries */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL) {
+		PRINT_ERR("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+		return -1;
+	}
+	for (i = 1; i < status_count; i++) {
+		/* after a failure in a burst, depending on ordering/fencing,
+		 * operations may be successful or skipped because of previous error.
+		 */
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[i] != RTE_DMA_STATUS_NOT_ATTEMPTED) {
+			PRINT_ERR("Error with status calls for fail idx %u. Status for job %u (of %u) is not successful\n",
+					fail_idx, count + i, COMP_BURST_SZ);
+			return -1;
+		}
+	}
+
+	/* check the completed + errors stats are as expected */
+	rte_dmadev_stats_get(dev_id, vchan, &stats);
+	if (stats.completed != baseline.completed + COMP_BURST_SZ) {
+		PRINT_ERR("Completed stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.completed, baseline.completed + COMP_BURST_SZ);
+		return -1;
+	}
+	for (i = 0; i < status_count; i++)
+		err_count += (status[i] != RTE_DMA_STATUS_SUCCESSFUL);
+	if (stats.errors != baseline.errors + err_count) {
+		PRINT_ERR("'Errors' stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.errors, baseline.errors + err_count);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+test_individual_status_query_with_failure(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test gathering batch statuses one at a time */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count = 0, status_count = 0;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dmadev_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, OPT_FENCE(j));
+		if (id < 0) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
+			return -1;
+		}
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* use regular "completed" until we hit error */
+	while (!error) {
+		uint16_t n = rte_dmadev_completed(dev_id, vchan, 1, &idx, &error);
+		count += n;
+		if (n > 1 || count >= COMP_BURST_SZ) {
+			PRINT_ERR("Error - too many completions got\n");
+			return -1;
+		}
+		if (n == 0 && !error) {
+			PRINT_ERR("Error, unexpectedly got zero completions after %u completed\n",
+					count);
+			return -1;
+		}
+	}
+	if (idx != invalid_addr_id - 1) {
+		PRINT_ERR("Error, last successful index not as expected, got %u, expected %u\n",
+				idx, invalid_addr_id - 1);
+		return -1;
+	}
+
+	/* use completed_status until we hit end of burst */
+	while (count + status_count < COMP_BURST_SZ) {
+		uint16_t n = rte_dmadev_completed_status(dev_id, vchan, 1, &idx,
+				&status[status_count]);
+		await_hw(dev_id, vchan); /* allow delay to ensure jobs are completed */
+		status_count += n;
+		if (n != 1) {
+			PRINT_ERR("Error: unexpected number of completions received, %u, not 1\n",
+					n);
+			return -1;
+		}
+	}
+
+	/* check for single failure */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL) {
+		PRINT_ERR("Error, unexpected successful DMA transaction\n");
+		return -1;
+	}
+	for (j = 1; j < status_count; j++) {
+		if (status[j] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[j] != RTE_DMA_STATUS_NOT_ATTEMPTED) {
+			PRINT_ERR("Error, unexpected DMA error reported\n");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+test_single_item_status_query_with_failure(int dev_id, uint16_t vchan,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* When error occurs just collect a single error using "completed_status()"
+	 * before going back to completed() calls
+	 */
+	enum rte_dma_status_code status;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count, count2;
+	unsigned int j;
+	bool error = 0;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dmadev_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
+			return -1;
+		}
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* get up to the error point */
+	count = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx) {
+		PRINT_ERR("Error with rte_dmadev_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+		rte_dmadev_dump(dev_id, stdout);
+		return -1;
+	}
+	if (error == false) {
+		PRINT_ERR("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+		return -1;
+	}
+	if (idx != invalid_addr_id - 1) {
+		PRINT_ERR("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+		return -1;
+	}
+
+	/* get the error code */
+	status_count = rte_dmadev_completed_status(dev_id, vchan, 1, &idx, &status);
+	if (status_count != 1) {
+		PRINT_ERR("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+		return -1;
+	}
+	if (status == RTE_DMA_STATUS_SUCCESSFUL) {
+		PRINT_ERR("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+		return -1;
+	}
+	/* delay in case time needed after err handled to complete other jobs */
+	await_hw(dev_id, vchan);
+
+	/* get the rest of the completions without status */
+	count2 = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (error == true) {
+		PRINT_ERR("Error, got further errors post completed_status() call, for failure case %u.\n",
+				fail_idx);
+		return -1;
+	}
+	if (count + status_count + count2 != COMP_BURST_SZ) {
+		PRINT_ERR("Error, incorrect number of completions received, got %u not %u\n",
+				count + status_count + count2, COMP_BURST_SZ);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+test_multi_failure(int dev_id, uint16_t vchan, struct rte_mbuf **srcs, struct rte_mbuf **dsts,
+		const unsigned int *fail, size_t num_fail)
+{
+	/* test having multiple errors in one go */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	unsigned int i, j;
+	uint16_t count, err_count = 0;
+	bool error = 0;
+
+	/* enqueue and gather completions in one go */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere in the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dmadev_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
+			return -1;
+		}
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ, NULL, status);
+	while (count < COMP_BURST_SZ) {
+		await_hw(dev_id, vchan);
+
+		uint16_t ret = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ - count,
+				NULL, &status[count]);
+		if (ret == 0) {
+			PRINT_ERR("Error getting all completions for jobs. Got %u of %u\n",
+					count, COMP_BURST_SZ);
+			return -1;
+		}
+		count += ret;
+	}
+	for (i = 0; i < count; i++) {
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
+			err_count++;
+	}
+	if (err_count != num_fail) {
+		PRINT_ERR("Error: Invalid number of failed completions returned, %u; expected %zu\n",
+			err_count, num_fail);
+		return -1;
+	}
+
+	/* enqueue and gather completions in bursts, but getting errors one at a time */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere in the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dmadev_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0) {
+			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
+			return -1;
+		}
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = 0;
+	err_count = 0;
+	while (count + err_count < COMP_BURST_SZ) {
+		count += rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, NULL, &error);
+		if (error) {
+			uint16_t ret = rte_dmadev_completed_status(dev_id, vchan, 1,
+					NULL, status);
+			if (ret != 1) {
+				PRINT_ERR("Error getting error-status for completions\n");
+				return -1;
+			}
+			err_count += ret;
+			await_hw(dev_id, vchan);
+		}
+	}
+	if (err_count != num_fail) {
+		PRINT_ERR("Error: Incorrect number of failed completions received, got %u not %zu\n",
+				err_count, num_fail);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+test_completion_status(int dev_id, uint16_t vchan, bool fence)
+{
+	const unsigned int fail[] = {0, 7, 14, 15};
+	struct rte_mbuf *srcs[COMP_BURST_SZ], *dsts[COMP_BURST_SZ];
+	unsigned int i;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+	}
+
+	for (i = 0; i < RTE_DIM(fail); i++) {
+		if (test_failure_in_full_burst(dev_id, vchan, fence, srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		if (test_individual_status_query_with_failure(dev_id, vchan, fence,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		/* this test runs the same whether fenced or unfenced, but no harm in running it twice */
+		if (test_single_item_status_query_with_failure(dev_id, vchan,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+	}
+
+	if (test_multi_failure(dev_id, vchan, srcs, dsts, fail, RTE_DIM(fail)) < 0)
+		return -1;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -386,6 +794,25 @@ test_dmadev_instance(uint16_t dev_id)
 	if (check_stats(&stats, true) < 0)
 		goto err;
 
+	/* to test error handling we can provide null pointers for source or dest in copies. This
+	 * requires VA mode in DPDK, since NULL(0) is a valid physical address.
+	 */
+	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+		rte_dmadev_stats_reset(dev_id, vchan);
+		printf("DMA Dev: %u, Running Completion Handling Tests (errors expected)\n",
+				dev_id);
+		if (test_completion_status(dev_id, vchan, false) != 0) /* without fences */
+			goto err;
+		if (test_completion_status(dev_id, vchan, true) != 0) /* with fences */
+			goto err;
+		rte_dmadev_stats_get(dev_id, 0, &stats);
+		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
+		printf("Ops completed: %"PRIu64"\t", stats.completed);
+		printf("Errors: %"PRIu64"\n", stats.errors);
+		if (check_stats(&stats, false) < 0) /* don't check stats.errors this time */
+			goto err;
+	}
+
 	rte_mempool_free(pool);
 	rte_dmadev_stop(dev_id);
 	rte_dmadev_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v2 6/6] app/test: add dmadev fill tests
  2021-09-01 16:32 ` [dpdk-dev] [PATCH v2 0/6] add test suite for DMA drivers Bruce Richardson
                     ` (4 preceding siblings ...)
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 5/6] app/test: test dmadev instance failure handling Bruce Richardson
@ 2021-09-01 16:32   ` Bruce Richardson
  2021-09-03 16:09     ` Conor Walsh
  5 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-01 16:32 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

For dma devices which support the fill operation, run unit tests to
verify fill behaviour is correct.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 73 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 5d7b6ddd87..c44c3ad9db 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -710,6 +710,62 @@ test_completion_status(int dev_id, uint16_t vchan, bool fence)
 	return 0;
 }
 
+static int
+test_enqueue_fill(int dev_id, uint16_t vchan)
+{
+	const unsigned int lengths[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst;
+	char *dst_data;
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	dst = rte_pktmbuf_alloc(pool);
+	if (dst == NULL) {
+		PRINT_ERR("Failed to allocate mbuf\n");
+		return -1;
+	}
+	dst_data = rte_pktmbuf_mtod(dst, char *);
+
+	for (i = 0; i < RTE_DIM(lengths); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, rte_pktmbuf_data_len(dst));
+
+		/* perform the fill operation */
+		int id = rte_dmadev_fill(dev_id, vchan, pattern,
+				rte_pktmbuf_iova(dst), lengths[i], RTE_DMA_OP_FLAG_SUBMIT);
+		if (id < 0) {
+			PRINT_ERR("Error with rte_ioat_enqueue_fill\n");
+			return -1;
+		}
+		await_hw(dev_id, vchan);
+
+		if (rte_dmadev_completed(dev_id, vchan, 1, NULL, NULL) != 1) {
+			PRINT_ERR("Error: fill operation failed (length: %u)\n", lengths[i]);
+			return -1;
+		}
+		/* check the data from the fill operation is correct */
+		for (j = 0; j < lengths[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte) {
+				PRINT_ERR("Error with fill operation (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], pat_byte);
+				return -1;
+			}
+		}
+		/* check that the data after the fill operation was not written to */
+		for (; j < rte_pktmbuf_data_len(dst); j++) {
+			if (dst_data[j] != 0) {
+				PRINT_ERR("Error, fill operation wrote too far (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], 0);
+				return -1;
+			}
+		}
+	}
+
+	rte_pktmbuf_free(dst);
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -813,6 +869,23 @@ test_dmadev_instance(uint16_t dev_id)
 			goto err;
 	}
 
+	if ((info.dev_capa & RTE_DMADEV_CAPA_OPS_FILL) == 0)
+		printf("DMA Dev: %u, No device fill support - skipping fill tests\n", dev_id);
+	else {
+		rte_dmadev_stats_reset(dev_id, vchan);
+		printf("DMA Dev: %u, Running Fill Tests\n", dev_id);
+
+		if (test_enqueue_fill(dev_id, vchan) != 0)
+			goto err;
+
+		rte_dmadev_stats_get(dev_id, 0, &stats);
+		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
+		printf("Ops completed: %"PRIu64"\t", stats.completed);
+		printf("Errors: %"PRIu64"\n", stats.errors);
+		if (check_stats(&stats, true) < 0)
+			goto err;
+	}
+
 	rte_mempool_free(pool);
 	rte_dmadev_stop(dev_id);
 	rte_dmadev_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/6] app/test: add basic dmadev instance tests
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 2/6] app/test: add basic dmadev instance tests Bruce Richardson
@ 2021-09-01 19:24     ` Mattias Rönnblom
  2021-09-02 10:30       ` Bruce Richardson
  2021-09-03 16:07     ` Conor Walsh
  1 sibling, 1 reply; 130+ messages in thread
From: Mattias Rönnblom @ 2021-09-01 19:24 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj

On 2021-09-01 18:32, Bruce Richardson wrote:
> Run basic sanity tests for configuring, starting and stopping a dmadev
> instance to help validate drivers. This also provides the framework for
> future tests for data-path operation.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   app/test/test_dmadev.c | 81 ++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 81 insertions(+)
> 
> diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> index bb01e86483..12f7c69629 100644
> --- a/app/test/test_dmadev.c
> +++ b/app/test/test_dmadev.c
> @@ -2,6 +2,7 @@
>    * Copyright(c) 2021 HiSilicon Limited.
>    * Copyright(c) 2021 Intel Corporation.
>    */
> +#include <inttypes.h>
>   
>   #include <rte_common.h>
>   #include <rte_dev.h>
> @@ -13,6 +14,77 @@
>   /* from test_dmadev_api.c */
>   extern int test_dmadev_api(uint16_t dev_id);
>   
> +#define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
> +
> +static inline int

Remove inline.

> +__rte_format_printf(3, 4)
> +print_err(const char *func, int lineno, const char *format, ...)
> +{
> +	va_list ap;
> +	int ret;
> +
> +	ret = fprintf(stderr, "In %s:%d - ", func, lineno);
Check return code here, and return on error.

> +	va_start(ap, format);
> +	ret += vfprintf(stderr, format, ap);

..and here.

> +	va_end(ap);
> +
> +	return ret;

A negative return value in one call and a valid byte count result for
the other should produce an error, but here it might not.

You might argue this is just test code, but then I suggest not checking 
the return values at all.
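
A sketch of the former option - propagating errors from both calls - would
be something like the following (illustrative only, not what the patch
does; it relies on the headers already included by test_dmadev.c):

static int
__rte_format_printf(3, 4)
print_err(const char *func, int lineno, const char *format, ...)
{
	va_list ap;
	int ret, n;

	/* bail out as soon as either stdio call reports an error */
	ret = fprintf(stderr, "In %s:%d - ", func, lineno);
	if (ret < 0)
		return ret;

	va_start(ap, format);
	n = vfprintf(stderr, format, ap);
	va_end(ap);
	if (n < 0)
		return n;

	return ret + n;
}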

> +}
> +
> +static int
> +test_dmadev_instance(uint16_t dev_id)
> +{
> +#define TEST_RINGSIZE 512
> +	struct rte_dmadev_stats stats;
> +	struct rte_dmadev_info info;
> +	const struct rte_dmadev_conf conf = { .nb_vchans = 1};
> +	const struct rte_dmadev_vchan_conf qconf = {
> +			.direction = RTE_DMA_DIR_MEM_TO_MEM,
> +			.nb_desc = TEST_RINGSIZE,
> +	};
> +	const int vchan = 0;
> +
> +	printf("\n### Test dmadev instance %u\n", dev_id);
> +
> +	rte_dmadev_info_get(dev_id, &info);
> +	if (info.max_vchans < 1) {
> +		PRINT_ERR("Error, no channels available on device id %u\n", dev_id);
> +		return -1;
> +	}
> +	if (rte_dmadev_configure(dev_id, &conf) != 0) {
> +		PRINT_ERR("Error with rte_dmadev_configure()\n");
> +		return -1;
> +	}
> +	if (rte_dmadev_vchan_setup(dev_id, vchan, &qconf) < 0) {
> +		PRINT_ERR("Error with queue configuration\n");
> +		return -1;
> +	}
> +
> +	rte_dmadev_info_get(dev_id, &info);
> +	if (info.nb_vchans != 1) {
> +		PRINT_ERR("Error, no configured queues reported on device id %u\n", dev_id);
> +		return -1;
> +	}
> +
> +	if (rte_dmadev_start(dev_id) != 0) {
> +		PRINT_ERR("Error with rte_dmadev_start()\n");
> +		return -1;
> +	}
> +	if (rte_dmadev_stats_get(dev_id, vchan, &stats) != 0) {
> +		PRINT_ERR("Error with rte_dmadev_stats_get()\n");
> +		return -1;
> +	}
> +	if (stats.completed != 0 || stats.submitted != 0 || stats.errors != 0) {
> +		PRINT_ERR("Error device stats are not all zero: completed = %"PRIu64", submitted = %"PRIu64", errors = %"PRIu64"\n",
> +				stats.completed, stats.submitted, stats.errors);
> +		return -1;
> +	}
> +
> +	rte_dmadev_stop(dev_id);
> +	rte_dmadev_stats_reset(dev_id, vchan);
> +	return 0;
> +}
> +
>   static int
>   test_apis(void)
>   {
> @@ -35,10 +107,19 @@ test_apis(void)
>   static int
>   test_dmadev(void)
>   {
> +	int i;
> +
>   	/* basic sanity on dmadev infrastructure */
>   	if (test_apis() < 0)
>   		return -1;
>   
> +	if (rte_dmadev_count() == 0)
> +		return TEST_SKIPPED;
> +
> +	for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++)
> +		if (rte_dmadevices[i].state == RTE_DMADEV_ATTACHED && test_dmadev_instance(i) < 0)
> +			return -1;
> +
>   	return 0;
>   }
>   
> 

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/6] app/test: test dmadev instance failure handling
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 5/6] app/test: test dmadev instance failure handling Bruce Richardson
@ 2021-09-01 19:53     ` Mattias Rönnblom
  2021-09-03 16:08     ` Conor Walsh
  2021-09-03 16:21     ` Kevin Laatz
  2 siblings, 0 replies; 130+ messages in thread
From: Mattias Rönnblom @ 2021-09-01 19:53 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj

On 2021-09-01 18:32, Bruce Richardson wrote:
> Add a series of tests to inject bad copy operations into a dmadev to
> test the error handling and reporting capabilities. Various combinations
> of errors in various positions in a burst are tested, as are errors in
> bursts with fence flag set, and multiple errors in a single burst.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   app/test/test_dmadev.c | 427 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 427 insertions(+)
> 
> diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> index 7a808a9cba..5d7b6ddd87 100644
> --- a/app/test/test_dmadev.c
> +++ b/app/test/test_dmadev.c
> @@ -302,6 +302,414 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
>   			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
>   }
>   
> +/* Failure handling test cases - global macros and variables for those tests*/
> +#define COMP_BURST_SZ	16
> +#define OPT_FENCE(idx) ((fence && idx == 8) ? RTE_DMA_OP_FLAG_FENCE : 0)
> +
> +static int
> +test_failure_in_full_burst(int dev_id, uint16_t vchan, bool fence,
> +		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
> +{
> +	/* Test single full batch statuses with failures */
> +	enum rte_dma_status_code status[COMP_BURST_SZ];
> +	struct rte_dmadev_stats baseline, stats;
> +	uint16_t invalid_addr_id = 0;
> +	uint16_t idx;
> +	uint16_t count, status_count;
> +	unsigned int i;
> +	bool error = 0;

error = false;

> +	int err_count = 0;
> +
> +	rte_dmadev_stats_get(dev_id, vchan, &baseline); /* get a baseline set of stats */
> +	for (i = 0; i < COMP_BURST_SZ; i++) {
> +		int id = rte_dmadev_copy(dev_id, vchan,
> +				(i == fail_idx ? 0 : (srcs[i]->buf_iova + srcs[i]->data_off)),
> +				dsts[i]->buf_iova + dsts[i]->data_off,
> +				COPY_LEN, OPT_FENCE(i));
> +		if (id < 0) {
> +			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", i);
> +			return -1;
> +		}
> +		if (i == fail_idx)
> +			invalid_addr_id = id;
> +	}
> +	rte_dmadev_submit(dev_id, vchan);
> +	rte_dmadev_stats_get(dev_id, vchan, &stats);
> +	if (stats.submitted != baseline.submitted + COMP_BURST_SZ) {
> +		PRINT_ERR("Submitted stats value not as expected, %"PRIu64" not %"PRIu64"\n",
> +				stats.submitted, baseline.submitted + COMP_BURST_SZ);
> +		return -1;
> +	}
> +
> +	await_hw(dev_id, vchan);
> +
> +	count = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
> +	if (count != fail_idx) {
> +		PRINT_ERR("Error with rte_dmadev_completed for failure test. Got returned %u not %u.\n",
> +				count, fail_idx);
> +		rte_dmadev_dump(dev_id, stdout);
> +		return -1;
> +	}
> +	if (error == false) {
if (!error)
> +		PRINT_ERR("Error, missing expected failed copy, %u. has_error is not set\n",
> +				fail_idx);
> +		return -1;
> +	}
> +	if (idx != invalid_addr_id - 1) {
> +		PRINT_ERR("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
> +				fail_idx, idx, invalid_addr_id - 1);
> +		return -1;
> +	}
> +
> +	/* all checks ok, now verify calling completed() again always returns 0 */
> +	for (i = 0; i < 10; i++) {
> +		if (rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error) != 0
> +				|| error == false || idx != (invalid_addr_id - 1)) {
> +			PRINT_ERR("Error with follow-up completed calls for fail idx %u\n",
> +					fail_idx);
> +			return -1;
> +		}
> +	}
> +
> +	status_count = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ,
> +			&idx, status);
> +	/* some HW may stop on error and be restarted after getting error status for single value
> +	 * To handle this case, if we get just one error back, wait for more completions and get
> +	 * status for rest of the burst
> +	 */
> +	if (status_count == 1) {
> +		await_hw(dev_id, vchan);
> +		status_count += rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ - 1,
> +					&idx, &status[1]);
> +	}
> +	/* check that at this point we have all status values */
> +	if (status_count != COMP_BURST_SZ - count) {
> +		PRINT_ERR("Error with completed_status calls for fail idx %u. Got %u not %u\n",
> +				fail_idx, status_count, COMP_BURST_SZ - count);
> +		return -1;
> +	}
> +	/* now verify just one failure followed by multiple successful or skipped entries */
> +	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL) {
> +		PRINT_ERR("Error with status returned for fail idx %u. First status was not failure\n",
> +				fail_idx);
> +		return -1;
> +	}
> +	for (i = 1; i < status_count; i++) {
> +		/* after a failure in a burst, depending on ordering/fencing,
> +		 * operations may be successful or skipped because of previous error.
> +		 */
> +		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL
> +				&& status[i] != RTE_DMA_STATUS_NOT_ATTEMPTED) {
> +			PRINT_ERR("Error with status calls for fail idx %u. Status for job %u (of %u) is not successful\n",
> +					fail_idx, count + i, COMP_BURST_SZ);
> +			return -1;
> +		}
> +	}
> +
> +	/* check the completed + errors stats are as expected */
> +	rte_dmadev_stats_get(dev_id, vchan, &stats);
> +	if (stats.completed != baseline.completed + COMP_BURST_SZ) {
> +		PRINT_ERR("Completed stats value not as expected, %"PRIu64" not %"PRIu64"\n",
> +				stats.completed, baseline.completed + COMP_BURST_SZ);
> +		return -1;
> +	}
> +	for (i = 0; i < status_count; i++)
> +		err_count += (status[i] != RTE_DMA_STATUS_SUCCESSFUL);
> +	if (stats.errors != baseline.errors + err_count) {
> +		PRINT_ERR("'Errors' stats value not as expected, %"PRIu64" not %"PRIu64"\n",
> +				stats.errors, baseline.errors + err_count);
> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +test_individual_status_query_with_failure(int dev_id, uint16_t vchan, bool fence,
> +		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
> +{
> +	/* Test gathering batch statuses one at a time */
> +	enum rte_dma_status_code status[COMP_BURST_SZ];
> +	uint16_t invalid_addr_id = 0;
> +	uint16_t idx;
> +	uint16_t count = 0, status_count = 0;
> +	unsigned int j;
> +	bool error = false;
> +
> +	for (j = 0; j < COMP_BURST_SZ; j++) {
> +		int id = rte_dmadev_copy(dev_id, vchan,
> +				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
> +				dsts[j]->buf_iova + dsts[j]->data_off,
> +				COPY_LEN, OPT_FENCE(j));
> +		if (id < 0) {
> +			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
> +			return -1;
> +		}
> +		if (j == fail_idx)
> +			invalid_addr_id = id;
> +	}
> +	rte_dmadev_submit(dev_id, vchan);
> +	await_hw(dev_id, vchan);
> +
> +	/* use regular "completed" until we hit error */
> +	while (!error) {
> +		uint16_t n = rte_dmadev_completed(dev_id, vchan, 1, &idx, &error);
> +		count += n;
> +		if (n > 1 || count >= COMP_BURST_SZ) {
> +			PRINT_ERR("Error - too many completions got\n");
> +			return -1;
> +		}
> +		if (n == 0 && !error) {
> +			PRINT_ERR("Error, unexpectedly got zero completions after %u completed\n",
> +					count);
> +			return -1;
> +		}
> +	}
> +	if (idx != invalid_addr_id - 1) {
> +		PRINT_ERR("Error, last successful index not as expected, got %u, expected %u\n",
> +				idx, invalid_addr_id - 1);
> +		return -1;
> +	}
> +
> +	/* use completed_status until we hit end of burst */
> +	while (count + status_count < COMP_BURST_SZ) {
> +		uint16_t n = rte_dmadev_completed_status(dev_id, vchan, 1, &idx,
> +				&status[status_count]);
> +		await_hw(dev_id, vchan); /* allow delay to ensure jobs are completed */
> +		status_count += n;
> +		if (n != 1) {
> +			PRINT_ERR("Error: unexpected number of completions received, %u, not 1\n",
> +					n);
> +			return -1;
> +		}
> +	}
> +
> +	/* check for single failure */
> +	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL) {
> +		PRINT_ERR("Error, unexpected successful DMA transaction\n");
> +		return -1;
> +	}
> +	for (j = 1; j < status_count; j++) {
> +		if (status[j] != RTE_DMA_STATUS_SUCCESSFUL
> +				&& status[j] != RTE_DMA_STATUS_NOT_ATTEMPTED) {
> +			PRINT_ERR("Error, unexpected DMA error reported\n");
> +			return -1;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +test_single_item_status_query_with_failure(int dev_id, uint16_t vchan,
> +		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
> +{
> +	/* When an error occurs, just collect a single error using "completed_status()"
> +	 * before going back to completed() calls
> +	 */
> +	enum rte_dma_status_code status;
> +	uint16_t invalid_addr_id = 0;
> +	uint16_t idx;
> +	uint16_t count, status_count, count2;
> +	unsigned int j;
> +	bool error = 0;

Same here.

> +
> +	for (j = 0; j < COMP_BURST_SZ; j++) {
> +		int id = rte_dmadev_copy(dev_id, vchan,
> +				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
> +				dsts[j]->buf_iova + dsts[j]->data_off,
> +				COPY_LEN, 0);
> +		if (id < 0) {
> +			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
> +			return -1;
> +		}
> +		if (j == fail_idx)
> +			invalid_addr_id = id;
> +	}
> +	rte_dmadev_submit(dev_id, vchan);
> +	await_hw(dev_id, vchan);
> +
> +	/* get up to the error point */
> +	count = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
> +	if (count != fail_idx) {
> +		PRINT_ERR("Error with rte_dmadev_completed for failure test. Got returned %u not %u.\n",
> +				count, fail_idx);
> +		rte_dmadev_dump(dev_id, stdout);
> +		return -1;
> +	}
> +	if (error == false) {

And here.

> +		PRINT_ERR("Error, missing expected failed copy, %u. has_error is not set\n",
> +				fail_idx);
> +		return -1;
> +	}
> +	if (idx != invalid_addr_id - 1) {
> +		PRINT_ERR("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
> +				fail_idx, idx, invalid_addr_id - 1);
> +		return -1;
> +	}
> +
> +	/* get the error code */
> +	status_count = rte_dmadev_completed_status(dev_id, vchan, 1, &idx, &status);
> +	if (status_count != 1) {
> +		PRINT_ERR("Error with completed_status calls for fail idx %u. Got %u not %u\n",
> +				fail_idx, status_count, COMP_BURST_SZ - count);
> +		return -1;
> +	}
> +	if (status == RTE_DMA_STATUS_SUCCESSFUL) {
> +		PRINT_ERR("Error with status returned for fail idx %u. First status was not failure\n",
> +				fail_idx);
> +		return -1;
> +	}
> +	/* delay in case time needed after err handled to complete other jobs */
> +	await_hw(dev_id, vchan);
> +
> +	/* get the rest of the completions without status */
> +	count2 = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
> +	if (error == true) {

if (error)

> +		PRINT_ERR("Error, got further errors post completed_status() call, for failure case %u.\n",
> +				fail_idx);
> +		return -1;
> +	}
> +	if (count + status_count + count2 != COMP_BURST_SZ) {
> +		PRINT_ERR("Error, incorrect number of completions received, got %u not %u\n",
> +				count + status_count + count2, COMP_BURST_SZ);
> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +test_multi_failure(int dev_id, uint16_t vchan, struct rte_mbuf **srcs, struct rte_mbuf **dsts,
> +		const unsigned int *fail, size_t num_fail)
> +{
> +	/* test having multiple errors in one go */
> +	enum rte_dma_status_code status[COMP_BURST_SZ];
> +	unsigned int i, j;
> +	uint16_t count, err_count = 0;
> +	bool error = 0;

false

> +
> +	/* enqueue and gather completions in one go */
> +	for (j = 0; j < COMP_BURST_SZ; j++) {
> +		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
> +		/* set up for failure if the current index is anywhere in the fails array */
> +		for (i = 0; i < num_fail; i++)
> +			if (j == fail[i])
> +				src = 0;
> +
> +		int id = rte_dmadev_copy(dev_id, vchan,
> +				src, dsts[j]->buf_iova + dsts[j]->data_off,
> +				COPY_LEN, 0);
> +		if (id < 0) {
> +			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
> +			return -1;
> +		}
> +	}
> +	rte_dmadev_submit(dev_id, vchan);
> +	await_hw(dev_id, vchan);
> +
> +	count = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ, NULL, status);
> +	while (count < COMP_BURST_SZ) {
> +		await_hw(dev_id, vchan);
> +
> +		uint16_t ret = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ - count,
> +				NULL, &status[count]);
> +		if (ret == 0) {
> +			PRINT_ERR("Error getting all completions for jobs. Got %u of %u\n",
> +					count, COMP_BURST_SZ);
> +			return -1;
> +		}
> +		count += ret;
> +	}
> +	for (i = 0; i < count; i++) {
> +		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
> +			err_count++;
> +	}

Remove {} around the loop?

> +	if (err_count != num_fail) {
> +		PRINT_ERR("Error: Invalid number of failed completions returned, %u; expected %zu\n",
> +			err_count, num_fail);
> +		return -1;
> +	}
> +
> +	/* enqueue and gather completions in bursts, but getting errors one at a time */
> +	for (j = 0; j < COMP_BURST_SZ; j++) {
> +		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
> +		/* set up for failure if the current index is anywhere in the fails array */
> +		for (i = 0; i < num_fail; i++)
> +			if (j == fail[i])
> +				src = 0;
> +
> +		int id = rte_dmadev_copy(dev_id, vchan,
> +				src, dsts[j]->buf_iova + dsts[j]->data_off,
> +				COPY_LEN, 0);
> +		if (id < 0) {
> +			PRINT_ERR("Error with rte_dmadev_copy for buffer %u\n", j);
> +			return -1;
> +		}
> +	}
> +	rte_dmadev_submit(dev_id, vchan);
> +	await_hw(dev_id, vchan);
> +
> +	count = 0;
> +	err_count = 0;
> +	while (count + err_count < COMP_BURST_SZ) {
> +		count += rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, NULL, &error);
> +		if (error) {
> +			uint16_t ret = rte_dmadev_completed_status(dev_id, vchan, 1,
> +					NULL, status);
> +			if (ret != 1) {
> +				PRINT_ERR("Error getting error-status for completions\n");
> +				return -1;
> +			}
> +			err_count += ret;
> +			await_hw(dev_id, vchan);
> +		}
> +	}
> +	if (err_count != num_fail) {
> +		PRINT_ERR("Error: Incorrect number of failed completions received, got %u not %zu\n",
> +				err_count, num_fail);
> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +test_completion_status(int dev_id, uint16_t vchan, bool fence)
> +{
> +	const unsigned int fail[] = {0, 7, 14, 15};
> +	struct rte_mbuf *srcs[COMP_BURST_SZ], *dsts[COMP_BURST_SZ];
> +	unsigned int i;
> +
> +	for (i = 0; i < COMP_BURST_SZ; i++) {
> +		srcs[i] = rte_pktmbuf_alloc(pool);
> +		dsts[i] = rte_pktmbuf_alloc(pool);
> +	}
> +
> +	for (i = 0; i < RTE_DIM(fail); i++) {
> +		if (test_failure_in_full_burst(dev_id, vchan, fence, srcs, dsts, fail[i]) < 0)
> +			return -1;
> +
> +		if (test_individual_status_query_with_failure(dev_id, vchan, fence,
> +				srcs, dsts, fail[i]) < 0)
> +			return -1;
> +
> +		/* test runs the same fenced or unfenced, but no harm in running it twice */
> +		if (test_single_item_status_query_with_failure(dev_id, vchan,
> +				srcs, dsts, fail[i]) < 0)
> +			return -1;
> +	}
> +
> +	if (test_multi_failure(dev_id, vchan, srcs, dsts, fail, RTE_DIM(fail)) < 0)
> +		return -1;
> +
> +	for (i = 0; i < COMP_BURST_SZ; i++) {
> +		rte_pktmbuf_free(srcs[i]);
> +		rte_pktmbuf_free(dsts[i]);
> +	}
> +	return 0;
> +}
> +
>   static int
>   test_dmadev_instance(uint16_t dev_id)
>   {
> @@ -386,6 +794,25 @@ test_dmadev_instance(uint16_t dev_id)
>   	if (check_stats(&stats, true) < 0)
>   		goto err;
>   
> +	/* to test error handling we can provide null pointers for source or dest in copies. This
> +	 * requires VA mode in DPDK, since NULL(0) is a valid physical address.
> +	 */
> +	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
> +		rte_dmadev_stats_reset(dev_id, vchan);
> +		printf("DMA Dev: %u, Running Completion Handling Tests (errors expected)\n",
> +				dev_id);
> +		if (test_completion_status(dev_id, vchan, false) != 0) /* without fences */
> +			goto err;
> +		if (test_completion_status(dev_id, vchan, true) != 0) /* with fences */
> +			goto err;
> +		rte_dmadev_stats_get(dev_id, 0, &stats);
> +		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
> +		printf("Ops completed: %"PRIu64"\t", stats.completed);
> +		printf("Errors: %"PRIu64"\n", stats.errors);
> +		if (check_stats(&stats, false) < 0) /* don't check stats.errors this time */
> +			goto err;
> +	}
> +
>   	rte_mempool_free(pool);
>   	rte_dmadev_stop(dev_id);
>   	rte_dmadev_stats_reset(dev_id, vchan);
> 

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests Bruce Richardson
@ 2021-09-02  7:44     ` Jerin Jacob
  2021-09-02  8:06       ` Bruce Richardson
  2021-09-03 16:05     ` Kevin Laatz
  2021-09-03 16:07     ` Conor Walsh
  2 siblings, 1 reply; 130+ messages in thread
From: Jerin Jacob @ 2021-09-02  7:44 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, fengchengwen, Jerin Jacob,
	Satananda Burla

On Wed, Sep 1, 2021 at 10:02 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> For each dmadev instance, perform some basic copy tests to validate that
> functionality.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  app/test/test_dmadev.c | 174 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 174 insertions(+)
>
> diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> index 12f7c69629..261f45db71 100644
> --- a/app/test/test_dmadev.c
> +++ b/app/test/test_dmadev.c
> @@ -2,12 +2,15 @@
>   * Copyright(c) 2021 HiSilicon Limited.
>   * Copyright(c) 2021 Intel Corporation.
>   */
> +#include <unistd.h>
>  #include <inttypes.h>
>
>  #include <rte_common.h>
>  #include <rte_dev.h>
>  #include <rte_dmadev.h>
>  #include <rte_bus_vdev.h>
> +#include <rte_mbuf.h>
> +#include <rte_random.h>
>
>  #include "test.h"
>
> @@ -16,6 +19,11 @@ extern int test_dmadev_api(uint16_t dev_id);
>
>  #define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
>
> +#define COPY_LEN 1024
> +
> +static struct rte_mempool *pool;
> +static uint16_t id_count;
> +
>  static inline int
>  __rte_format_printf(3, 4)
>  print_err(const char *func, int lineno, const char *format, ...)
> @@ -31,6 +39,134 @@ print_err(const char *func, int lineno, const char *format, ...)
>         return ret;
>  }
>
> +static inline void
> +await_hw(int dev_id, uint16_t vchan)
> +{
> +       int idle = rte_dmadev_vchan_idle(dev_id, vchan);
> +       if (idle < 0) {
> +               /* for drivers that don't support this op, just sleep for 25 microseconds */
> +               usleep(25);
> +               return;
> +       }

Can the following model eliminate the need for the rte_dmadev_vchan_idle() API?

static inline bool
await_hw(int dev_id, uint16_t vchan, uint16_t nb_req, uint16_t *last_idx)
{
	const uint64_t tmo = rte_get_timer_hz();
	bool has_error = false;

	const uint64_t end_cycles = rte_get_timer_cycles() + tmo;
	while (rte_get_timer_cycles() < end_cycles && nb_req > 0 && has_error == false) {
		rte_pause();
		nb_req -= rte_dmadev_completed(dev_id, nb_req, last_idx, &has_error);
	}

	return has_error;
}



> +
> +       /* for those that do, *max* end time is one second from now, but all should be faster */
> +       const uint64_t end_cycles = rte_get_timer_cycles() + rte_get_timer_hz();
> +       while (!idle && rte_get_timer_cycles() < end_cycles) {
> +               rte_pause();
> +               idle = rte_dmadev_vchan_idle(dev_id, vchan);
> +       }
> +}

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests
  2021-09-02  7:44     ` Jerin Jacob
@ 2021-09-02  8:06       ` Bruce Richardson
  2021-09-02 10:54         ` Jerin Jacob
  0 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-02  8:06 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, fengchengwen, Jerin Jacob,
	Satananda Burla

On Thu, Sep 02, 2021 at 01:14:38PM +0530, Jerin Jacob wrote:
> On Wed, Sep 1, 2021 at 10:02 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > For each dmadev instance, perform some basic copy tests to validate that
> > functionality.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >  app/test/test_dmadev.c | 174 +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 174 insertions(+)
> >
> > diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> > index 12f7c69629..261f45db71 100644
> > --- a/app/test/test_dmadev.c
> > +++ b/app/test/test_dmadev.c
> > @@ -2,12 +2,15 @@
> >   * Copyright(c) 2021 HiSilicon Limited.
> >   * Copyright(c) 2021 Intel Corporation.
> >   */
> > +#include <unistd.h>
> >  #include <inttypes.h>
> >
> >  #include <rte_common.h>
> >  #include <rte_dev.h>
> >  #include <rte_dmadev.h>
> >  #include <rte_bus_vdev.h>
> > +#include <rte_mbuf.h>
> > +#include <rte_random.h>
> >
> >  #include "test.h"
> >
> > @@ -16,6 +19,11 @@ extern int test_dmadev_api(uint16_t dev_id);
> >
> >  #define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
> >
> > +#define COPY_LEN 1024
> > +
> > +static struct rte_mempool *pool;
> > +static uint16_t id_count;
> > +
> >  static inline int
> >  __rte_format_printf(3, 4)
> >  print_err(const char *func, int lineno, const char *format, ...)
> > @@ -31,6 +39,134 @@ print_err(const char *func, int lineno, const char *format, ...)
> >         return ret;
> >  }
> >
> > +static inline void
> > +await_hw(int dev_id, uint16_t vchan)
> > +{
> > +       int idle = rte_dmadev_vchan_idle(dev_id, vchan);
> > +       if (idle < 0) {
> > +               /* for drivers that don't support this op, just sleep for 25 microseconds */
> > +               usleep(25);
> > +               return;
> > +       }
> 
> Can following model eliminate the need for rte_dmadev_vchan_idle() API. Right?
> 
> static inline bool
> await_hw(int dev_id, uint16_t vchan, uint16_t  nb_req, uint16_t *last_idx)
> {
>              const uint64_t tmo =   rte_get_timer_hz();
>              bool has_error  = false;
> 
>              const uint64_t end_cycles = rte_get_timer_cycles() + tmo;
>               while (rte_get_timer_cycles() < end_cycles && nb_req > 0
> && has_error  == false) {
>                            rte_pause();
>                            nb_req -= rte_dmadev_completed(dev_id,
> nb_req, last_idx, &has_error);
>               }
> 
>               return has_error ;
> }
>
It would, but unfortunately it also removes the possibility of doing a
number of the tests in the set, particularly around failure handling. We
used runtime coverage tools to ensure we were hitting as many legs of code
as possible in drivers, and to cover these possibilities we need to do
various different types of completion gathering, e.g. gather multiple
bursts in one go, gathering a single burst in two halves, gathering a burst
using completion_status rather than completion, gathering completions
one-at-a-time with a call for each individually, and for error handling
gathering just the failing element alone, or gathering completions for all
remaining elements not just the failing one, etc. etc. 
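
For instance, a minimal sketch of the "one burst in two halves" pattern, using
the rte_dmadev_completed() signature as elsewhere in this set (burst_sz is an
illustrative parameter; the jobs are assumed to be already enqueued, submitted
and finished by the hardware):

static int
gather_in_two_halves(int dev_id, uint16_t vchan, uint16_t burst_sz)
{
	uint16_t idx, count;
	bool error = false;

	count = rte_dmadev_completed(dev_id, vchan, burst_sz / 2, &idx, &error);
	count += rte_dmadev_completed(dev_id, vchan, burst_sz - count, &idx, &error);
	if (count != burst_sz || error)
		return -1; /* not all jobs returned, or an unexpected error seen */
	return 0;
}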

These tests are useful both for finding bugs (and they did find ones in our
drivers), but also to ensure similar behaviour across different drivers
using the API. However, they really only can be done in a consistent way if
we are able to ensure that at certain points the hardware has finished
processing before we begin gathering completions. Therefore, having a way
to poll for idle is useful. As you see, I've also left in the delay as a
fallback in case drivers choose not to implement it, thereby making it an
optional API.

Beyond testing, I can see the API to poll for idleness being useful for the
device shutdown case. I was considering whether the "stop" API should also
use it to ensure that the hardware is idle before stopping. I decided
against it for now, but you could see applications making use of this -
waiting for the hardware to finish its work before stopping it.
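
As an illustrative sketch only (not something in this patchset), that could
look along these lines, using the rte_dmadev_vchan_idle() call discussed above
with a bounded wait:

static void
drain_then_stop(int dev_id, uint16_t vchan)
{
	const uint64_t end_cycles = rte_get_timer_cycles() + rte_get_timer_hz();

	/* poll until idle, unsupported (negative return) or timeout, then stop */
	while (rte_dmadev_vchan_idle(dev_id, vchan) == 0 &&
			rte_get_timer_cycles() < end_cycles)
		rte_pause();

	rte_dmadev_stop(dev_id);
}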

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/6] app/test: add basic dmadev instance tests
  2021-09-01 19:24     ` Mattias Rönnblom
@ 2021-09-02 10:30       ` Bruce Richardson
  0 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-02 10:30 UTC (permalink / raw)
  To: Mattias Rönnblom; +Cc: dev, conor.walsh, kevin.laatz, fengchengwen, jerinj

On Wed, Sep 01, 2021 at 09:24:12PM +0200, Mattias Rönnblom wrote:
> On 2021-09-01 18:32, Bruce Richardson wrote:
> > Run basic sanity tests for configuring, starting and stopping a dmadev
> > instance to help validate drivers. This also provides the framework for
> > future tests for data-path operation.
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >   app/test/test_dmadev.c | 81 ++++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 81 insertions(+)
> > 
> > diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> > index bb01e86483..12f7c69629 100644
> > --- a/app/test/test_dmadev.c
> > +++ b/app/test/test_dmadev.c
> > @@ -2,6 +2,7 @@
> >    * Copyright(c) 2021 HiSilicon Limited.
> >    * Copyright(c) 2021 Intel Corporation.
> >    */
> > +#include <inttypes.h>
> >   #include <rte_common.h>
> >   #include <rte_dev.h>
> > @@ -13,6 +14,77 @@
> >   /* from test_dmadev_api.c */
> >   extern int test_dmadev_api(uint16_t dev_id);
> > +#define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
> > +
> > +static inline int
> 
> Remove inline.
>
While I understand it's probably not doing a lot having "inline" there, any
particular reason why you think it should be removed?
 
> > +__rte_format_printf(3, 4)
> > +print_err(const char *func, int lineno, const char *format, ...)
> > +{
> > +	va_list ap;
> > +	int ret;
> > +
> > +	ret = fprintf(stderr, "In %s:%d - ", func, lineno);
> Check return code here, and return on error.
> 
> > +	va_start(ap, format);
> > +	ret += vfprintf(stderr, format, ap);
> 
> ..and here.
> 
> > +	va_end(ap);
> > +
> > +	return ret;
> 
> A negative return value in one call and an valid byte count result for the
> other should produce an error, but here it might not.
> 
> You might argue this is just test code, but then I suggest not checking the
> return values at all.
> 

Indeed the return value is never checked anywhere in the calls to PRINT_ERR
macro, and since the writes are going to stderr it's pretty low risk.
Therefore, I'll remove the return value handling completely as you suggest.

/Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests
  2021-09-02  8:06       ` Bruce Richardson
@ 2021-09-02 10:54         ` Jerin Jacob
  2021-09-02 11:43           ` Bruce Richardson
  0 siblings, 1 reply; 130+ messages in thread
From: Jerin Jacob @ 2021-09-02 10:54 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, fengchengwen, Jerin Jacob,
	Satananda Burla

On Thu, Sep 2, 2021 at 1:36 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Thu, Sep 02, 2021 at 01:14:38PM +0530, Jerin Jacob wrote:
> > On Wed, Sep 1, 2021 at 10:02 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > For each dmadev instance, perform some basic copy tests to validate that
> > > functionality.
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > ---
> > >  app/test/test_dmadev.c | 174 +++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 174 insertions(+)
> > >
> > > diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> > > index 12f7c69629..261f45db71 100644
> > > --- a/app/test/test_dmadev.c
> > > +++ b/app/test/test_dmadev.c
> > > @@ -2,12 +2,15 @@
> > >   * Copyright(c) 2021 HiSilicon Limited.
> > >   * Copyright(c) 2021 Intel Corporation.
> > >   */
> > > +#include <unistd.h>
> > >  #include <inttypes.h>
> > >
> > >  #include <rte_common.h>
> > >  #include <rte_dev.h>
> > >  #include <rte_dmadev.h>
> > >  #include <rte_bus_vdev.h>
> > > +#include <rte_mbuf.h>
> > > +#include <rte_random.h>
> > >
> > >  #include "test.h"
> > >
> > > @@ -16,6 +19,11 @@ extern int test_dmadev_api(uint16_t dev_id);
> > >
> > >  #define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
> > >
> > > +#define COPY_LEN 1024
> > > +
> > > +static struct rte_mempool *pool;
> > > +static uint16_t id_count;
> > > +
> > >  static inline int
> > >  __rte_format_printf(3, 4)
> > >  print_err(const char *func, int lineno, const char *format, ...)
> > > @@ -31,6 +39,134 @@ print_err(const char *func, int lineno, const char *format, ...)
> > >         return ret;
> > >  }
> > >
> > > +static inline void
> > > +await_hw(int dev_id, uint16_t vchan)
> > > +{
> > > +       int idle = rte_dmadev_vchan_idle(dev_id, vchan);
> > > +       if (idle < 0) {
> > > +               /* for drivers that don't support this op, just sleep for 25 microseconds */
> > > +               usleep(25);
> > > +               return;
> > > +       }
> >
> > Can following model eliminate the need for rte_dmadev_vchan_idle() API. Right?
> >
> > static inline bool
> > await_hw(int dev_id, uint16_t vchan, uint16_t  nb_req, uint16_t *last_idx)
> > {
> >              const uint64_t tmo =   rte_get_timer_hz();
> >              bool has_error  = false;
> >
> >              const uint64_t end_cycles = rte_get_timer_cycles() + tmo;
> >               while (rte_get_timer_cycles() < end_cycles && nb_req > 0
> > && has_error  == false) {
> >                            rte_pause();
> >                            nb_req -= rte_dmadev_completed(dev_id,
> > nb_req, last_idx, &has_error);
> >               }
> >
> >               return has_error ;
> > }
> >
> It would, but unfortunately it also removes the possibility of doing a
> number of the tests in the set, particularly around failure handling. We
> used runtime coverage tools to ensure we were hitting as many legs of code
> as possible in drivers, and to cover these possibilities we need to do
> various different types of completion gathering, e.g. gather multiple
> bursts in one go, gathering a single burst in two halves, gathering a burst
> using completion_status rather than completion, gathering completions
> one-at-a-time with a call for each individually, and for error handling
> gathering just the failing element alone, or gathering completions for all
> remaining elements not just the failing one, etc. etc.

Agree with the rationale.


>
> These tests are useful both for finding bugs (and they did find ones in our
> drivers), but also to ensure similar behaviour across different drivers
> using the API. However, they really only can be done in a consistent way if
> we are able to ensure that at certain points the hardware has finished
> processing before we begin gathering completions. Therefore, having a way
> to poll for idle is useful. As you see, I've also left in the delay as a
> fallback in case drivers choose not to implement it, thereby making it an
> optional API.
>
> Beyond testing, I can see the API to poll for idleness being useful for the
> device shutdown case. I was considering whether the "stop" API should also
> use it to ensure that the hardware is idle before stopping. I decided
> against it for now, but you could see applications making use of this -
> waiting for the hardware to finish its work before stopping it.


I think 25us will not be enough, especially for PCI-Dev to PCI-Dev kinds of
test cases. Since these are functional test cases, I think we can use a much
higher value to cover all cases. Maybe 50ms is a good target.


>
> Regards,
> /Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests
  2021-09-02 10:54         ` Jerin Jacob
@ 2021-09-02 11:43           ` Bruce Richardson
  2021-09-02 13:05             ` Jerin Jacob
  0 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-02 11:43 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, fengchengwen, Jerin Jacob,
	Satananda Burla

On Thu, Sep 02, 2021 at 04:24:18PM +0530, Jerin Jacob wrote:
> 
> I think 25us will not be enough, e.s.p If is PCI-Dev to PCI-Dev kind
> of test cases.
> Since it is the functional test case, I think, we can keep it a very
> higher range to
> support all cases. Maybe 50ms is a good target.
> 

Sure, no problem to push it up. If it turns out that all upstreamed drivers
implement the "idle" function we can remove the fallback option completely,
but I'll keep it for now and push the timeout up. Do you really think it needs
to be in the (tens of) millisecond range? Even for tests going across PCI
would most transactions not complete in the microsecond range, e.g. 100
usec?

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/6] dmadev: add device idle check for testing use
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 1/6] dmadev: add device idle check for testing use Bruce Richardson
@ 2021-09-02 12:54     ` fengchengwen
  2021-09-02 14:21       ` Bruce Richardson
  0 siblings, 1 reply; 130+ messages in thread
From: fengchengwen @ 2021-09-02 12:54 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, kevin.laatz, jerinj

When some hardware is faulty, it cannot set an error code and simply stops working.
In addition, interrupts are generally not enabled. Therefore, for such hardware, the
framework needs a mechanism to report the channel status so that applications can
detect it.

So how about extending vchan_idle to vchan_status, covering: idle, running, error-stop?
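
For example, one possible shape for that (sketch only, names illustrative):

enum rte_dmadev_vchan_status {
	RTE_DMA_VCHAN_IDLE,         /* not processing, awaiting ops */
	RTE_DMA_VCHAN_ACTIVE,       /* currently processing jobs */
	RTE_DMA_VCHAN_HALTED_ERROR, /* stopped on error, cannot accept new ops */
};

int rte_dmadev_vchan_status(uint16_t dev_id, uint16_t vchan,
		enum rte_dmadev_vchan_status *status);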

On 2021/9/2 0:32, Bruce Richardson wrote:
> Add in a function to check if a device or vchan has completed all jobs
> assigned to it, without gathering in the results. This is primarily for
> use in testing, to allow the hardware to be in a known-state prior to
> gathering completions.
> 

[snip]

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests
  2021-09-02 11:43           ` Bruce Richardson
@ 2021-09-02 13:05             ` Jerin Jacob
  2021-09-02 14:21               ` Bruce Richardson
  0 siblings, 1 reply; 130+ messages in thread
From: Jerin Jacob @ 2021-09-02 13:05 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, fengchengwen, Jerin Jacob,
	Satananda Burla

On Thu, Sep 2, 2021 at 5:13 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Thu, Sep 02, 2021 at 04:24:18PM +0530, Jerin Jacob wrote:
> >
> > I think 25us will not be enough, e.s.p If is PCI-Dev to PCI-Dev kind
> > of test cases.
> > Since it is the functional test case, I think, we can keep it a very
> > higher range to
> > support all cases. Maybe 50ms is a good target.
> >
>
> Sure, no problem to push it up. If it turns out that all upstreamed drivers
> implement the "idle" function we can remove the fallback option completely,
> but I'll keep it for now and push timeout up. Do you really think it needs
> to be in the (tens of )millisecond range? Even for tests going across PCI
> would most transactions not complete in the microsecond range, e.g. 100
> usec?

Based on bus load and the size of buffers, the completion time can vary. I
think 1 ms could be a good trade-off. Also, if in future some HW needs more
than that, we can increase it.

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/6] dmadev: add device idle check for testing use
  2021-09-02 12:54     ` fengchengwen
@ 2021-09-02 14:21       ` Bruce Richardson
  0 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-02 14:21 UTC (permalink / raw)
  To: fengchengwen; +Cc: dev, conor.walsh, kevin.laatz, jerinj

On Thu, Sep 02, 2021 at 08:54:11PM +0800, fengchengwen wrote:
> When some hardware is faulty, the error code cannot be set, and it just stops working.
> In addition, interrupts are generally not enabled. Therefore, for such hardware, the
> framework needs to have a mechanism to transmit the status so that applications can
> sense the status.
> 
> So how about extend vchan_idle to vchan_status, include: idle, running, error-stop ?
> 

We can look at changing that. By "idle" I originally meant "non-active",
meaning that it's either halted or idle, but I think we can separate out
those two into separate items like you suggest.

/Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests
  2021-09-02 13:05             ` Jerin Jacob
@ 2021-09-02 14:21               ` Bruce Richardson
  0 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-02 14:21 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, fengchengwen, Jerin Jacob,
	Satananda Burla

On Thu, Sep 02, 2021 at 06:35:07PM +0530, Jerin Jacob wrote:
> On Thu, Sep 2, 2021 at 5:13 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Thu, Sep 02, 2021 at 04:24:18PM +0530, Jerin Jacob wrote:
> > >
> > > I think 25us will not be enough, e.s.p If is PCI-Dev to PCI-Dev kind
> > > of test cases.
> > > Since it is the functional test case, I think, we can keep it a very
> > > higher range to
> > > support all cases. Maybe 50ms is a good target.
> > >
> >
> > Sure, no problem to push it up. If it turns out that all upstreamed drivers
> > implement the "idle" function we can remove the fallback option completely,
> > but I'll keep it for now and push timeout up. Do you really think it needs
> > to be in the (tens of )millisecond range? Even for tests going across PCI
> > would most transactions not complete in the microsecond range, e.g. 100
> > usec?
> 
> Based on busload and size of buffers the completion time can vary. I
> think, 1 ms could be
> good trade-off. Also, In the future some HW needs beyond that then we
> can increase.

Ok, thanks.

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests Bruce Richardson
  2021-09-02  7:44     ` Jerin Jacob
@ 2021-09-03 16:05     ` Kevin Laatz
  2021-09-03 16:07     ` Conor Walsh
  2 siblings, 0 replies; 130+ messages in thread
From: Kevin Laatz @ 2021-09-03 16:05 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, fengchengwen, jerinj

On 01/09/2021 17:32, Bruce Richardson wrote:
> For each dmadev instance, perform some basic copy tests to validate that
> functionality.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   app/test/test_dmadev.c | 174 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 174 insertions(+)

<snip>

> +
> +static int
> +test_enqueue_copies(int dev_id, uint16_t vchan)
> +{
> +	unsigned int i;
> +	uint16_t id;
> +
> +	/* test doing a single copy */
> +	do {
> +		struct rte_mbuf *src, *dst;
> +		char *src_data, *dst_data;
> +
> +		src = rte_pktmbuf_alloc(pool);
> +		dst = rte_pktmbuf_alloc(pool);
> +		src_data = rte_pktmbuf_mtod(src, char *);
> +		dst_data = rte_pktmbuf_mtod(dst, char *);
> +
> +		for (i = 0; i < COPY_LEN; i++)
> +			src_data[i] = rte_rand() & 0xFF;
> +
> +		id = rte_dmadev_copy(dev_id, vchan, src->buf_iova + src->data_off,
> +				dst->buf_iova + dst->data_off, COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT);

Could use the rte_mbuf APIs to get the struct members here and 
throughout the other tests in this set.

No strong opinion on this either way.
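
For reference, a sketch of how that fragment could read with rte_pktmbuf_iova(),
which resolves to buf_iova + data_off:

		id = rte_dmadev_copy(dev_id, vchan, rte_pktmbuf_iova(src),
				rte_pktmbuf_iova(dst), COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT);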


> +		if (id != id_count) {
> +			PRINT_ERR("Error with rte_dmadev_copy, got %u, expected %u\n",
> +					id, id_count);
> +			return -1;
> +		}
> +
> +		/* give time for copy to finish, then check it was done */
> +		await_hw(dev_id, vchan);
> +
> +		for (i = 0; i < COPY_LEN; i++) {
> +			if (dst_data[i] != src_data[i]) {
> +				PRINT_ERR("Data mismatch at char %u [Got %02x not %02x]\n", i,
> +						dst_data[i], src_data[i]);
> +				rte_dmadev_dump(dev_id, stderr);
> +				return -1;
> +			}
> +		}
> +
> +		/* now check completion works */
> +		if (rte_dmadev_completed(dev_id, vchan, 1, &id, NULL) != 1) {
> +			PRINT_ERR("Error with rte_dmadev_completed\n");
> +			return -1;
> +		}
> +		if (id != id_count) {
> +			PRINT_ERR("Error:incorrect job id received, %u [expected %u]\n",
> +					id, id_count);
> +			return -1;
> +		}
> +
> +		rte_pktmbuf_free(src);
> +		rte_pktmbuf_free(dst);
> +
> +		/* now check completion works */

This comment doesn't match the check being done.


> +		if (rte_dmadev_completed(dev_id, 0, 1, NULL, NULL) != 0) {
> +			PRINT_ERR("Error with rte_dmadev_completed in empty check\n");
> +			return -1;
> +		}
> +		id_count++;
> +
> +	} while (0);
> +

<snip>

Apart from minor comments above, LGTM.

Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>



^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/6] app/test: add basic dmadev instance tests
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 2/6] app/test: add basic dmadev instance tests Bruce Richardson
  2021-09-01 19:24     ` Mattias Rönnblom
@ 2021-09-03 16:07     ` Conor Walsh
  1 sibling, 0 replies; 130+ messages in thread
From: Conor Walsh @ 2021-09-03 16:07 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: kevin.laatz, fengchengwen, jerinj


> Run basic sanity tests for configuring, starting and stopping a dmadev
> instance to help validate drivers. This also provides the framework for
> future tests for data-path operation.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---

Reviewed-by: Conor Walsh <conor.walsh@intel.com>


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests Bruce Richardson
  2021-09-02  7:44     ` Jerin Jacob
  2021-09-03 16:05     ` Kevin Laatz
@ 2021-09-03 16:07     ` Conor Walsh
  2 siblings, 0 replies; 130+ messages in thread
From: Conor Walsh @ 2021-09-03 16:07 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: kevin.laatz, fengchengwen, jerinj


> For each dmadev instance, perform some basic copy tests to validate that
> functionality.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
<snip>
> +static inline void
> +await_hw(int dev_id, uint16_t vchan)
> +{
> +	int idle = rte_dmadev_vchan_idle(dev_id, vchan);
> +	if (idle < 0) {
> +		/* for drivers that don't support this op, just sleep for 25 microseconds */
> +		usleep(25);
> +		return;
> +	}
> +
> +	/* for those that do, *max* end time is one second from now, but all should be faster */
> +	const uint64_t end_cycles = rte_get_timer_cycles() + rte_get_timer_hz();
> +	while (!idle && rte_get_timer_cycles() < end_cycles) {
> +		rte_pause();
> +		idle = rte_dmadev_vchan_idle(dev_id, vchan);
> +	}
> +}

The new DMA IOAT driver works fine with this function and will not be
affected by an increase in the timeout as suggested by Jerin.

Reviewed-by: Conor Walsh <conor.walsh@intel.com>


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 4/6] app/test: add more comprehensive dmadev copy tests
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 4/6] app/test: add more comprehensive " Bruce Richardson
@ 2021-09-03 16:08     ` Conor Walsh
  2021-09-03 16:11     ` Kevin Laatz
  1 sibling, 0 replies; 130+ messages in thread
From: Conor Walsh @ 2021-09-03 16:08 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: kevin.laatz, fengchengwen, jerinj


> Add unit tests for various combinations of use for dmadev, copying
> bursts of packets in various formats, e.g.
>
> 1. enqueuing two smaller bursts and completing them as one burst
> 2. enqueuing one burst and gathering completions in smaller bursts
> 3. using completed_status() function to gather completions rather than
>     just completed()
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
<snip>
> +static inline int
> +check_stats(struct rte_dmadev_stats *stats, bool check_errors)
> +{
> +	if (stats->completed != stats->submitted) {
> +		PRINT_ERR("Error, not all submitted jobs are reported as completed\n");
> +		return -1;
> +	}
Need to double check with Chengwen about the definitive meaning of the 
completed stat.


Reviewed-by: Conor Walsh <conor.walsh@intel.com>


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/6] app/test: test dmadev instance failure handling
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 5/6] app/test: test dmadev instance failure handling Bruce Richardson
  2021-09-01 19:53     ` Mattias Rönnblom
@ 2021-09-03 16:08     ` Conor Walsh
  2021-09-03 16:21     ` Kevin Laatz
  2 siblings, 0 replies; 130+ messages in thread
From: Conor Walsh @ 2021-09-03 16:08 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: kevin.laatz, fengchengwen, jerinj


> Add a series of tests to inject bad copy operations into a dmadev to
> test the error handling and reporting capabilities. Various combinations
> of errors in various positions in a burst are tested, as are errors in
> bursts with fence flag set, and multiple errors in a single burst.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---

Reviewed-by: Conor Walsh <conor.walsh@intel.com>


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 6/6] app/test: add dmadev fill tests
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 6/6] app/test: add dmadev fill tests Bruce Richardson
@ 2021-09-03 16:09     ` Conor Walsh
  2021-09-03 16:17       ` Conor Walsh
  0 siblings, 1 reply; 130+ messages in thread
From: Conor Walsh @ 2021-09-03 16:09 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: kevin.laatz, fengchengwen, jerinj


> From: Kevin Laatz <kevin.laatz@intel.com>
>
> For dma devices which support the fill operation, run unit tests to
> verify fill behaviour is correct.
>
> Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---

Reviewed-by: Conor Walsh <conor.walsh@intel.com>


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 4/6] app/test: add more comprehensive dmadev copy tests
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 4/6] app/test: add more comprehensive " Bruce Richardson
  2021-09-03 16:08     ` Conor Walsh
@ 2021-09-03 16:11     ` Kevin Laatz
  1 sibling, 0 replies; 130+ messages in thread
From: Kevin Laatz @ 2021-09-03 16:11 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, fengchengwen, jerinj

On 01/09/2021 17:32, Bruce Richardson wrote:
> Add unit tests for various combinations of use for dmadev, copying
> bursts of packets in various formats, e.g.
>
> 1. enqueuing two smaller bursts and completing them as one burst
> 2. enqueuing one burst and gathering completions in smaller bursts
> 3. using completed_status() function to gather completions rather than
>     just completed()
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   app/test/test_dmadev.c | 142 ++++++++++++++++++++++++++++++++++++++++-
>   1 file changed, 140 insertions(+), 2 deletions(-)
>

Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 6/6] app/test: add dmadev fill tests
  2021-09-03 16:09     ` Conor Walsh
@ 2021-09-03 16:17       ` Conor Walsh
  2021-09-03 16:33         ` Bruce Richardson
  0 siblings, 1 reply; 130+ messages in thread
From: Conor Walsh @ 2021-09-03 16:17 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: kevin.laatz, fengchengwen, jerinj


It's probably worth noting that the new DMA IOAT driver passes all of
these driver tests.

>
>> From: Kevin Laatz <kevin.laatz@intel.com>
>>
>> For dma devices which support the fill operation, run unit tests to
>> verify fill behaviour is correct.
>>
>> Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>> ---
>
> Reviewed-by: Conor Walsh <conor.walsh@intel.com>
>

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/6] app/test: test dmadev instance failure handling
  2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 5/6] app/test: test dmadev instance failure handling Bruce Richardson
  2021-09-01 19:53     ` Mattias Rönnblom
  2021-09-03 16:08     ` Conor Walsh
@ 2021-09-03 16:21     ` Kevin Laatz
  2 siblings, 0 replies; 130+ messages in thread
From: Kevin Laatz @ 2021-09-03 16:21 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, fengchengwen, jerinj

On 01/09/2021 17:32, Bruce Richardson wrote:
> Add a series of tests to inject bad copy operations into a dmadev to
> test the error handling and reporting capabilities. Various combinations
> of errors in various positions in a burst are tested, as are errors in
> bursts with fence flag set, and multiple errors in a single burst.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   app/test/test_dmadev.c | 427 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 427 insertions(+)
>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v2 6/6] app/test: add dmadev fill tests
  2021-09-03 16:17       ` Conor Walsh
@ 2021-09-03 16:33         ` Bruce Richardson
  0 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-03 16:33 UTC (permalink / raw)
  To: Conor Walsh; +Cc: dev, kevin.laatz, fengchengwen, jerinj

On Fri, Sep 03, 2021 at 05:17:49PM +0100, Conor Walsh wrote:
> 
> It's probably worth noting that the new DMA IOAT driver passes all of these
> driver tests.
> 
That's probably better called out in the cover letter of the IOAT set
itself.

^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
                   ` (7 preceding siblings ...)
  2021-09-01 16:32 ` [dpdk-dev] [PATCH v2 0/6] add test suite for DMA drivers Bruce Richardson
@ 2021-09-07 16:49 ` Bruce Richardson
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 1/8] dmadev: add channel status check for testing use Bruce Richardson
                     ` (7 more replies)
  2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
                   ` (2 subsequent siblings)
  11 siblings, 8 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-07 16:49 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

This patchset adds a fairly comprehensive set of tests for basic dmadev
functionality. Tests are added to verify basic copy operation in each
device, using both submit function and submit flag, and verifying completion
gathering using both "completed()" and "completed_status()" functions. Beyond
that, tests are then added for the error reporting and handling, as is a suite
of tests for the fill() operation for devices that support those.

Depends-on: series-18738 ("support dmadev")

V3:
* add patch and tests for a burst-capacity function
* addressed review feedback from v2
* code cleanups to try and shorten code where possible

V2:
* added into dmadev a API to check for a device being idle
* removed the hard-coded timeout delays before checking completions, and instead
  wait for device to be idle
* added in checks for statistics updates as part of some tests
* fixed issue identified by internal coverity scan
* other minor miscellaneous changes and fixes.

Bruce Richardson (5):
  dmadev: add channel status check for testing use
  app/test: add basic dmadev instance tests
  app/test: add basic dmadev copy tests
  app/test: add more comprehensive dmadev copy tests
  app/test: test dmadev instance failure handling

Kevin Laatz (3):
  dmadev: add burst capacity API
  app/test: add dmadev fill tests
  app/test: add dmadev burst capacity API test

 app/test/test_dmadev.c       | 820 ++++++++++++++++++++++++++++++++++-
 lib/dmadev/rte_dmadev.c      |  27 ++
 lib/dmadev/rte_dmadev.h      |  52 +++
 lib/dmadev/rte_dmadev_core.h |  11 +
 lib/dmadev/version.map       |   2 +
 5 files changed, 911 insertions(+), 1 deletion(-)

--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v3 1/8] dmadev: add channel status check for testing use
  2021-09-07 16:49 ` [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers Bruce Richardson
@ 2021-09-07 16:49   ` Bruce Richardson
  2021-09-08 10:50     ` Walsh, Conor
  2021-09-08 13:20     ` Kevin Laatz
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API Bruce Richardson
                     ` (6 subsequent siblings)
  7 siblings, 2 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-07 16:49 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add in a function to check if a device or vchan has completed all jobs
assigned to it, without gathering in the results. This is primarily for
use in testing, to allow the hardware to be in a known-state prior to
gathering completions.
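
For illustration, a test can use it along these lines (sketch only, error
handling omitted):

	enum rte_dmadev_vchan_status st;

	do {
		if (rte_dmadev_vchan_status(dev_id, vchan, &st) < 0)
			break; /* driver does not support the status check */
		rte_pause();
	} while (st == RTE_DMA_VCHAN_ACTIVE);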

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/dmadev/rte_dmadev.c      | 16 ++++++++++++++++
 lib/dmadev/rte_dmadev.h      | 33 +++++++++++++++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h |  6 ++++++
 lib/dmadev/version.map       |  1 +
 4 files changed, 56 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index ee8db9aaca..ab45928efb 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -605,3 +605,19 @@ rte_dmadev_dump(uint16_t dev_id, FILE *f)
 
 	return 0;
 }
+
+int
+rte_dmadev_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dmadev_vchan_status *status)
+{
+	struct rte_dmadev *dev = &rte_dmadevices[dev_id];
+
+	RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
+	if (vchan >= dev->data->dev_conf.nb_vchans) {
+		RTE_DMADEV_LOG(ERR,
+			"Device %u vchan %u out of range\n", dev_id, vchan);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vchan_status, -ENOTSUP);
+	return (*dev->dev_ops->vchan_status)(dev, vchan, status);
+}
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index 3cb95fe31a..39d73872c8 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -640,6 +640,39 @@ __rte_experimental
 int
 rte_dmadev_stats_reset(uint16_t dev_id, uint16_t vchan);
 
+/**
+ * device vchannel status
+ *
+ * Enum with the options for the channel status, either idle, active or halted due to error
+ */
+enum rte_dmadev_vchan_status {
+	RTE_DMA_VCHAN_IDLE,          /**< not processing, awaiting ops */
+	RTE_DMA_VCHAN_ACTIVE,        /**< currently processing jobs */
+	RTE_DMA_VCHAN_HALTED_ERROR,  /**< not processing due to error, cannot accept new ops */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Determine if all jobs have completed on a device channel.
+ * This function is primarily designed for testing use, as it allows a process to check if
+ * all jobs are completed, without actually gathering completions from those jobs.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ * @param[out] status
+ *   The vchan status
+ * @return
+ *   0 - call completed successfully
+ *   < 0 - error code indicating there was a problem calling the API
+ */
+__rte_experimental
+int
+rte_dmadev_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dmadev_vchan_status *status);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index 32618b020c..3c9d698044 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -55,6 +55,10 @@ typedef int (*rte_dmadev_stats_reset_t)(struct rte_dmadev *dev, uint16_t vchan);
 typedef int (*rte_dmadev_dump_t)(const struct rte_dmadev *dev, FILE *f);
 /**< @internal Used to dump internal information. */
 
+typedef int (*rte_dmadev_vchan_status_t)(const struct rte_dmadev *dev, uint16_t vchan,
+		enum rte_dmadev_vchan_status *status);
+/**< @internal Used to check if a virtual channel has finished all jobs. */
+
 typedef int (*rte_dmadev_copy_t)(struct rte_dmadev *dev, uint16_t vchan,
 				 rte_iova_t src, rte_iova_t dst,
 				 uint32_t length, uint64_t flags);
@@ -110,6 +114,8 @@ struct rte_dmadev_ops {
 	rte_dmadev_stats_get_t      stats_get;
 	rte_dmadev_stats_reset_t    stats_reset;
 
+	rte_dmadev_vchan_status_t   vchan_status;
+
 	rte_dmadev_dump_t           dev_dump;
 };
 
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 80be592713..10eeb0f7a3 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_dmadev_stop;
 	rte_dmadev_submit;
 	rte_dmadev_vchan_setup;
+	rte_dmadev_vchan_status;
 
 	local: *;
 };
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-07 16:49 ` [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers Bruce Richardson
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 1/8] dmadev: add channel status check for testing use Bruce Richardson
@ 2021-09-07 16:49   ` Bruce Richardson
  2021-09-08 10:53     ` Walsh, Conor
  2021-09-08 18:17     ` Jerin Jacob
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 3/8] app/test: add basic dmadev instance tests Bruce Richardson
                     ` (5 subsequent siblings)
  7 siblings, 2 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-07 16:49 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj

From: Kevin Laatz <kevin.laatz@intel.com>

Add a burst capacity check API to the dmadev library. This API is useful to
applications which need to know how many descriptors can be enqueued in the
current batch. For example, it could be used to determine whether all
segments of a multi-segment packet can be enqueued in the same batch or not
(to avoid half-offload of the packet).
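
As an illustrative sketch of that use case (enqueue_seg() here is a
hypothetical helper, not something provided by this patch):

static int
enqueue_pkt_if_room(int dev_id, uint16_t vchan, struct rte_mbuf *pkt)
{
	struct rte_mbuf *seg;

	/* only start on the packet if all its segments fit in the current batch */
	if (rte_dmadev_burst_capacity(dev_id, vchan) < pkt->nb_segs)
		return -1; /* caller should submit what is queued and retry later */

	for (seg = pkt; seg != NULL; seg = seg->next)
		if (enqueue_seg(dev_id, vchan, seg) < 0)
			return -1;
	return 0;
}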

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 lib/dmadev/rte_dmadev.c      | 11 +++++++++++
 lib/dmadev/rte_dmadev.h      | 19 +++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h |  5 +++++
 lib/dmadev/version.map       |  1 +
 4 files changed, 36 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index ab45928efb..6494871f05 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -573,6 +573,17 @@ dmadev_dump_capability(FILE *f, uint64_t dev_capa)
 	fprintf(f, "\n");
 }
 
+int
+rte_dmadev_burst_capacity(uint16_t dev_id, uint16_t vchan)
+{
+	const struct rte_dmadev *dev = &rte_dmadevices[dev_id];
+
+	RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->burst_capacity, -ENOTSUP);
+	return (*dev->dev_ops->burst_capacity)(dev, vchan);
+}
+
 int
 rte_dmadev_dump(uint16_t dev_id, FILE *f)
 {
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index 39d73872c8..8b84914810 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -673,6 +673,25 @@ __rte_experimental
 int
 rte_dmadev_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dmadev_vchan_status *status);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Check remaining capacity in descriptor ring for the current burst.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ *
+ * @return
+ *   - Remaining space in the descriptor ring for the current burst on success.
+ *   - -ENOTSUP: if not supported by the device.
+ */
+__rte_experimental
+int
+rte_dmadev_burst_capacity(uint16_t dev_id, uint16_t vchan);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index 3c9d698044..2756936798 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -52,6 +52,10 @@ typedef int (*rte_dmadev_stats_get_t)(const struct rte_dmadev *dev,
 typedef int (*rte_dmadev_stats_reset_t)(struct rte_dmadev *dev, uint16_t vchan);
 /**< @internal Used to reset basic statistics. */
 
+typedef uint16_t (*rte_dmadev_burst_capacity_t)(const struct rte_dmadev *dev,
+			uint16_t vchan);
+/** < @internal Used to check the remaining space in descriptor ring. */
+
 typedef int (*rte_dmadev_dump_t)(const struct rte_dmadev *dev, FILE *f);
 /**< @internal Used to dump internal information. */
 
@@ -114,6 +118,7 @@ struct rte_dmadev_ops {
 	rte_dmadev_stats_get_t      stats_get;
 	rte_dmadev_stats_reset_t    stats_reset;
 
+	rte_dmadev_burst_capacity_t burst_capacity;
 	rte_dmadev_vchan_status_t   vchan_status;
 
 	rte_dmadev_dump_t           dev_dump;
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 10eeb0f7a3..56cb279e8f 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -1,6 +1,7 @@
 EXPERIMENTAL {
 	global:
 
+	rte_dmadev_burst_capacity;
 	rte_dmadev_close;
 	rte_dmadev_completed;
 	rte_dmadev_completed_status;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v3 3/8] app/test: add basic dmadev instance tests
  2021-09-07 16:49 ` [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers Bruce Richardson
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 1/8] dmadev: add channel status check for testing use Bruce Richardson
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API Bruce Richardson
@ 2021-09-07 16:49   ` Bruce Richardson
  2021-09-08 13:21     ` Kevin Laatz
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 4/8] app/test: add basic dmadev copy tests Bruce Richardson
                     ` (4 subsequent siblings)
  7 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-07 16:49 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Run basic sanity tests for configuring, starting and stopping a dmadev
instance to help validate drivers. This also provides the framework for
future tests for data-path operation.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 72 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 71 insertions(+), 1 deletion(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 92c47fc041..691785b74f 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -3,6 +3,8 @@
  * Copyright(c) 2021 Intel Corporation.
  */
 
+#include <inttypes.h>
+
 #include <rte_dmadev.h>
 #include <rte_bus_vdev.h>
 
@@ -11,6 +13,65 @@
 /* from test_dmadev_api.c */
 extern int test_dmadev_api(uint16_t dev_id);
 
+#define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); return -1; } while (0)
+
+static void
+__rte_format_printf(3, 4)
+print_err(const char *func, int lineno, const char *format, ...)
+{
+	va_list ap;
+
+	fprintf(stderr, "In %s:%d - ", func, lineno);
+	va_start(ap, format);
+	vfprintf(stderr, format, ap);
+	va_end(ap);
+}
+
+static int
+test_dmadev_instance(uint16_t dev_id)
+{
+#define TEST_RINGSIZE 512
+	struct rte_dmadev_stats stats;
+	struct rte_dmadev_info info;
+	const struct rte_dmadev_conf conf = { .nb_vchans = 1};
+	const struct rte_dmadev_vchan_conf qconf = {
+			.direction = RTE_DMA_DIR_MEM_TO_MEM,
+			.nb_desc = TEST_RINGSIZE,
+	};
+	const int vchan = 0;
+
+	printf("\n### Test dmadev instance %u\n", dev_id);
+
+	rte_dmadev_info_get(dev_id, &info);
+	if (info.max_vchans < 1)
+		ERR_RETURN("Error, no channels available on device id %u\n", dev_id);
+
+	if (rte_dmadev_configure(dev_id, &conf) != 0)
+		ERR_RETURN("Error with rte_dmadev_configure()\n");
+
+	if (rte_dmadev_vchan_setup(dev_id, vchan, &qconf) < 0)
+		ERR_RETURN("Error with queue configuration\n");
+
+	rte_dmadev_info_get(dev_id, &info);
+	if (info.nb_vchans != 1)
+		ERR_RETURN("Error, no configured queues reported on device id %u\n", dev_id);
+
+	if (rte_dmadev_start(dev_id) != 0)
+		ERR_RETURN("Error with rte_dmadev_start()\n");
+
+	if (rte_dmadev_stats_get(dev_id, vchan, &stats) != 0)
+		ERR_RETURN("Error with rte_dmadev_stats_get()\n");
+
+	if (stats.completed != 0 || stats.submitted != 0 || stats.errors != 0)
+		ERR_RETURN("Error device stats are not all zero: completed = %"PRIu64", "
+				"submitted = %"PRIu64", errors = %"PRIu64"\n",
+				stats.completed, stats.submitted, stats.errors);
+
+	rte_dmadev_stop(dev_id);
+	rte_dmadev_stats_reset(dev_id, vchan);
+	return 0;
+}
+
 static int
 test_apis(void)
 {
@@ -33,9 +94,18 @@ test_apis(void)
 static int
 test_dmadev(void)
 {
+	int i;
+
 	/* basic sanity on dmadev infrastructure */
 	if (test_apis() < 0)
-		return -1;
+		ERR_RETURN("Error performing API tests\n");
+
+	if (rte_dmadev_count() == 0)
+		return TEST_SKIPPED;
+
+	for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++)
+		if (rte_dmadevices[i].state == RTE_DMADEV_ATTACHED && test_dmadev_instance(i) < 0)
+			ERR_RETURN("Error, test failure for device %d\n", i);
 
 	return 0;
 }
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v3 4/8] app/test: add basic dmadev copy tests
  2021-09-07 16:49 ` [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers Bruce Richardson
                     ` (2 preceding siblings ...)
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 3/8] app/test: add basic dmadev instance tests Bruce Richardson
@ 2021-09-07 16:49   ` Bruce Richardson
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 5/8] app/test: add more comprehensive " Bruce Richardson
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-07 16:49 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

For each dmadev instance, perform some basic copy tests to validate that
functionality.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 175 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 175 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 691785b74f..69c8bc9b84 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -6,6 +6,10 @@
 #include <inttypes.h>
 
 #include <rte_dmadev.h>
+#include <rte_mbuf.h>
+#include <rte_pause.h>
+#include <rte_cycles.h>
+#include <rte_random.h>
 #include <rte_bus_vdev.h>
 
 #include "test.h"
@@ -15,6 +19,11 @@ extern int test_dmadev_api(uint16_t dev_id);
 
 #define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); return -1; } while (0)
 
+#define COPY_LEN 1024
+
+static struct rte_mempool *pool;
+static uint16_t id_count;
+
 static void
 __rte_format_printf(3, 4)
 print_err(const char *func, int lineno, const char *format, ...)
@@ -27,10 +36,155 @@ print_err(const char *func, int lineno, const char *format, ...)
 	va_end(ap);
 }
 
+static int
+runtest(const char *printable, int (*test_fn)(int dev_id, uint16_t vchan), int iterations,
+		int dev_id, uint16_t vchan, bool check_err_stats)
+{
+	struct rte_dmadev_stats stats;
+	int i;
+
+	rte_dmadev_stats_reset(dev_id, vchan);
+	printf("DMA Dev %d: Running %s Tests %s\n", dev_id, printable,
+			check_err_stats ? " " : "(errors expected)");
+	for (i = 0; i < iterations; i++) {
+		if (test_fn(dev_id, vchan) < 0)
+			return -1;
+
+		rte_dmadev_stats_get(dev_id, 0, &stats);
+		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
+		printf("Ops completed: %"PRIu64"\t", stats.completed);
+		printf("Errors: %"PRIu64"\r", stats.errors);
+
+		if (stats.completed != stats.submitted)
+			ERR_RETURN("\nError, not all submitted jobs are reported as completed\n");
+		if (check_err_stats && stats.errors != 0)
+			ERR_RETURN("\nErrors reported during op processing, aborting tests\n");
+	}
+	printf("\n");
+	return 0;
+}
+
+static void
+await_hw(int dev_id, uint16_t vchan)
+{
+	enum rte_dmadev_vchan_status st;
+
+	if (rte_dmadev_vchan_status(dev_id, vchan, &st) < 0) {
+		/* for drivers that don't support this op, just sleep for 1 millisecond */
+		rte_delay_us_sleep(1000);
+		return;
+	}
+
+	/* for those that do, *max* end time is one second from now, but all should be faster */
+	const uint64_t end_cycles = rte_get_timer_cycles() + rte_get_timer_hz();
+	while (st == RTE_DMA_VCHAN_ACTIVE && rte_get_timer_cycles() < end_cycles) {
+		rte_pause();
+		rte_dmadev_vchan_status(dev_id, vchan, &st);
+	}
+}
+
+static int
+test_enqueue_copies(int dev_id, uint16_t vchan)
+{
+	unsigned int i;
+	uint16_t id;
+
+	/* test doing a single copy */
+	do {
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		id = rte_dmadev_copy(dev_id, vchan, rte_pktmbuf_iova(src), rte_pktmbuf_iova(dst),
+				COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT);
+		if (id != id_count)
+			ERR_RETURN("Error with rte_dmadev_copy, got %u, expected %u\n",
+					id, id_count);
+
+		/* give time for copy to finish, then check it was done */
+		await_hw(dev_id, vchan);
+
+		for (i = 0; i < COPY_LEN; i++)
+			if (dst_data[i] != src_data[i])
+				ERR_RETURN("Data mismatch at char %u [Got %02x not %02x]\n", i,
+						dst_data[i], src_data[i]);
+
+		/* now check completion works */
+		if (rte_dmadev_completed(dev_id, vchan, 1, &id, NULL) != 1)
+			ERR_RETURN("Error with rte_dmadev_completed\n");
+
+		if (id != id_count)
+			ERR_RETURN("Error:incorrect job id received, %u [expected %u]\n",
+					id, id_count);
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+
+		/* now check completion returns nothing more */
+		if (rte_dmadev_completed(dev_id, 0, 1, NULL, NULL) != 0)
+			ERR_RETURN("Error with rte_dmadev_completed in empty check\n");
+
+		id_count++;
+
+	} while (0);
+
+	/* test doing multiple single copies */
+	do {
+		const uint16_t max_ops = 4;
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+		uint16_t count;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		/* perform the same copy <max_ops> times */
+		for (i = 0; i < max_ops; i++)
+			if (rte_dmadev_copy(dev_id, vchan,
+					rte_pktmbuf_iova(src),
+					rte_pktmbuf_iova(dst),
+					COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT) != id_count++)
+				ERR_RETURN("Error with rte_dmadev_copy\n");
+
+		await_hw(dev_id, vchan);
+
+		count = rte_dmadev_completed(dev_id, vchan, max_ops * 2, &id, NULL);
+		if (count != max_ops)
+			ERR_RETURN("Error with rte_dmadev_completed, got %u not %u\n",
+					count, max_ops);
+
+		if (id != id_count - 1)
+			ERR_RETURN("Error, incorrect job id returned: got %u not %u\n",
+					id, id_count - 1);
+
+		for (i = 0; i < COPY_LEN; i++)
+			if (dst_data[i] != src_data[i])
+				ERR_RETURN("Data mismatch at char %u\n", i);
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+	} while (0);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
 #define TEST_RINGSIZE 512
+#define CHECK_ERRS    true
 	struct rte_dmadev_stats stats;
 	struct rte_dmadev_info info;
 	const struct rte_dmadev_conf conf = { .nb_vchans = 1};
@@ -66,10 +220,31 @@ test_dmadev_instance(uint16_t dev_id)
 		ERR_RETURN("Error device stats are not all zero: completed = %"PRIu64", "
 				"submitted = %"PRIu64", errors = %"PRIu64"\n",
 				stats.completed, stats.submitted, stats.errors);
+	id_count = 0;
+
+	/* create a mempool for running tests */
+	pool = rte_pktmbuf_pool_create("TEST_DMADEV_POOL",
+			TEST_RINGSIZE * 2, /* n == num elements */
+			32,  /* cache size */
+			0,   /* priv size */
+			2048, /* data room size */
+			info.device->numa_node);
+	if (pool == NULL)
+		ERR_RETURN("Error with mempool creation\n");
 
+	/* run the test cases, use many iterations to ensure UINT16_MAX id wraparound */
+	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
+	rte_mempool_free(pool);
 	rte_dmadev_stop(dev_id);
 	rte_dmadev_stats_reset(dev_id, vchan);
 	return 0;
+
+err:
+	rte_mempool_free(pool);
+	rte_dmadev_stop(dev_id);
+	return -1;
 }
 
 static int
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread
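
For reference, the two enqueue styles the series exercises - the
per-operation submit flag versus batching with an explicit doorbell - can be
condensed into a short sketch using the v3 function names; the helper and
its simple polling loop are illustrative assumptions only:

#include <stdbool.h>
#include <stdint.h>
#include <rte_common.h>
#include <rte_dmadev.h>

/* sketch: the same copy done with the per-op SUBMIT flag, then as a small
 * batch with one explicit doorbell, followed by gathering all completions.
 */
static int
copy_two_ways(uint16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst,
		uint32_t len)
{
	uint16_t last_idx, done = 0;
	bool has_error = false;

	/* style 1: single op, doorbell rung by the SUBMIT flag */
	if (rte_dmadev_copy(dev_id, vchan, src, dst, len, RTE_DMA_OP_FLAG_SUBMIT) < 0)
		return -1;

	/* style 2: enqueue a small batch, then ring the doorbell once */
	if (rte_dmadev_copy(dev_id, vchan, src, dst, len, 0) < 0)
		return -1;
	if (rte_dmadev_copy(dev_id, vchan, src, dst, len, 0) < 0)
		return -1;
	rte_dmadev_submit(dev_id, vchan);

	/* poll for all three jobs; a real application would bound this loop */
	while (done < 3 && !has_error)
		done += rte_dmadev_completed(dev_id, vchan, 3 - done, &last_idx, &has_error);

	return (done == 3 && !has_error) ? 0 : -1;
}
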

* [dpdk-dev] [PATCH v3 5/8] app/test: add more comprehensive dmadev copy tests
  2021-09-07 16:49 ` [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers Bruce Richardson
                     ` (3 preceding siblings ...)
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 4/8] app/test: add basic dmadev copy tests Bruce Richardson
@ 2021-09-07 16:49   ` Bruce Richardson
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 6/8] app/test: test dmadev instance failure handling Bruce Richardson
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-07 16:49 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add unit tests for various combinations of use for dmadev, copying
bursts of packets in various formats, e.g.

1. enqueuing two smaller bursts and completing them as one burst
2. enqueuing one burst and gathering completions in smaller bursts
3. using completed_status() function to gather completions rather than
   just completed()

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 101 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 100 insertions(+), 1 deletion(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 69c8bc9b84..3c9b711ab6 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -83,6 +83,98 @@ await_hw(int dev_id, uint16_t vchan)
 	}
 }
 
+/* run a series of copy tests just using some different options for enqueues and completions */
+static int
+do_multi_copies(int dev_id, uint16_t vchan,
+		int split_batches,     /* submit 2 x 16 or 1 x 32 burst */
+		int split_completions, /* gather 2 x 16 or 1 x 32 completions */
+		int use_completed_status) /* use completed or completed_status function */
+{
+	struct rte_mbuf *srcs[32], *dsts[32];
+	enum rte_dma_status_code sc[32];
+	unsigned int i, j;
+	bool dma_err = false;
+
+	/* Enqueue burst of copies and hit doorbell */
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		uint64_t *src_data;
+
+		if (split_batches && i == RTE_DIM(srcs) / 2)
+			rte_dmadev_submit(dev_id, vchan);
+
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+		if (srcs[i] == NULL || dsts[i] == NULL)
+			ERR_RETURN("Error allocating buffers\n");
+
+		src_data = rte_pktmbuf_mtod(srcs[i], uint64_t *);
+		for (j = 0; j < COPY_LEN/sizeof(uint64_t); j++)
+			src_data[j] = rte_rand();
+
+		if (rte_dmadev_copy(dev_id, vchan, srcs[i]->buf_iova + srcs[i]->data_off,
+				dsts[i]->buf_iova + dsts[i]->data_off, COPY_LEN, 0) != id_count++)
+			ERR_RETURN("Error with rte_dmadev_copy for buffer %u\n", i);
+	}
+	rte_dmadev_submit(dev_id, vchan);
+
+	await_hw(dev_id, vchan);
+
+	if (split_completions) {
+		/* gather completions in two halves */
+		uint16_t half_len = RTE_DIM(srcs) / 2;
+		int ret = rte_dmadev_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err)
+			ERR_RETURN("Error with rte_dmadev_completed - first half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+
+		ret = rte_dmadev_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err)
+			ERR_RETURN("Error with rte_dmadev_completed - second half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+	} else {
+		/* gather all completions in one go, using either
+		 * completed or completed_status fns
+		 */
+		if (!use_completed_status) {
+			int n = rte_dmadev_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+			if (n != RTE_DIM(srcs) || dma_err)
+				ERR_RETURN("Error with rte_dmadev_completed, %u [expected: %zu], dma_err = %d\n",
+						n, RTE_DIM(srcs), dma_err);
+		} else {
+			int n = rte_dmadev_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc);
+			if (n != RTE_DIM(srcs))
+				ERR_RETURN("Error with rte_dmadev_completed_status, %u [expected: %zu]\n",
+						n, RTE_DIM(srcs));
+
+			for (j = 0; j < (uint16_t)n; j++)
+				if (sc[j] != RTE_DMA_STATUS_SUCCESSFUL)
+					ERR_RETURN("Error with rte_dmadev_completed_status, job %u reports failure [code %u]\n",
+							j, sc[j]);
+		}
+	}
+
+	/* check for empty */
+	int ret = use_completed_status ?
+			rte_dmadev_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc) :
+			rte_dmadev_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+	if (ret != 0)
+		ERR_RETURN("Error with completion check - ops unexpectedly returned\n");
+
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		char *src_data, *dst_data;
+
+		src_data = rte_pktmbuf_mtod(srcs[i], char *);
+		dst_data = rte_pktmbuf_mtod(dsts[i], char *);
+		for (j = 0; j < COPY_LEN; j++)
+			if (src_data[j] != dst_data[j])
+				ERR_RETURN("Error with copy of packet %u, byte %u\n", i, j);
+
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
 static int
 test_enqueue_copies(int dev_id, uint16_t vchan)
 {
@@ -177,7 +269,14 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 		rte_pktmbuf_free(dst);
 	} while (0);
 
-	return 0;
+	/* test doing multiple copies */
+	return do_multi_copies(dev_id, vchan, 0, 0, 0) /* enqueue and complete 1 batch at a time */
+			/* enqueue 2 batches and then complete both */
+			|| do_multi_copies(dev_id, vchan, 1, 0, 0)
+			/* enqueue 1 batch, then complete in two halves */
+			|| do_multi_copies(dev_id, vchan, 0, 1, 0)
+			/* test using completed_status in place of regular completed API */
+			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
 static int
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v3 6/8] app/test: test dmadev instance failure handling
  2021-09-07 16:49 ` [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers Bruce Richardson
                     ` (4 preceding siblings ...)
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 5/8] app/test: add more comprehensive " Bruce Richardson
@ 2021-09-07 16:49   ` Bruce Richardson
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 7/8] app/test: add dmadev fill tests Bruce Richardson
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 8/8] app/test: add dmadev burst capacity API test Bruce Richardson
  7 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-07 16:49 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a series of tests to inject bad copy operations into a dmadev to
test the error handling and reporting capabilities. Various combinations
of errors in various positions in a burst are tested, as are errors in
bursts with fence flag set, and multiple errors in a single burst.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 357 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 357 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 3c9b711ab6..35ba1ff45d 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -279,6 +279,354 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
+/* Failure handling test cases - global macros and variables for those tests*/
+#define COMP_BURST_SZ	16
+#define OPT_FENCE(idx) ((fence && idx == 8) ? RTE_DMA_OP_FLAG_FENCE : 0)
+
+static int
+test_failure_in_full_burst(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test single full batch statuses with failures */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	struct rte_dmadev_stats baseline, stats;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count;
+	unsigned int i;
+	bool error = false;
+	int err_count = 0;
+
+	rte_dmadev_stats_get(dev_id, vchan, &baseline); /* get a baseline set of stats */
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		int id = rte_dmadev_copy(dev_id, vchan,
+				(i == fail_idx ? 0 : (srcs[i]->buf_iova + srcs[i]->data_off)),
+				dsts[i]->buf_iova + dsts[i]->data_off,
+				COPY_LEN, OPT_FENCE(i));
+		if (id < 0)
+			ERR_RETURN("Error with rte_dmadev_copy for buffer %u\n", i);
+		if (i == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	rte_dmadev_stats_get(dev_id, vchan, &stats);
+	if (stats.submitted != baseline.submitted + COMP_BURST_SZ)
+		ERR_RETURN("Submitted stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.submitted, baseline.submitted + COMP_BURST_SZ);
+
+	await_hw(dev_id, vchan);
+
+	count = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx)
+		ERR_RETURN("Error with rte_dmadev_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+	if (!error)
+		ERR_RETURN("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+
+	/* all checks ok, now verify calling completed() again always returns 0 */
+	for (i = 0; i < 10; i++)
+		if (rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error) != 0
+				|| error == false || idx != (invalid_addr_id - 1))
+			ERR_RETURN("Error with follow-up completed calls for fail idx %u\n",
+					fail_idx);
+
+	status_count = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ,
+			&idx, status);
+	/* some HW may stop on error and be restarted after the error status for the failed job is
+	 * retrieved. To handle this case, if we get just one error back, wait for more completions
+	 * and get the status for the rest of the burst
+	 */
+	if (status_count == 1) {
+		await_hw(dev_id, vchan);
+		status_count += rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ - 1,
+					&idx, &status[1]);
+	}
+	/* check that at this point we have all status values */
+	if (status_count != COMP_BURST_SZ - count)
+		ERR_RETURN("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+	/* now verify just one failure followed by multiple successful or skipped entries */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+	for (i = 1; i < status_count; i++)
+		/* after a failure in a burst, depending on ordering/fencing,
+		 * operations may be successful or skipped because of previous error.
+		 */
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[i] != RTE_DMA_STATUS_NOT_ATTEMPTED)
+			ERR_RETURN("Error with status calls for fail idx %u. Status for job %u (of %u) is not successful\n",
+					fail_idx, count + i, COMP_BURST_SZ);
+
+	/* check the completed + errors stats are as expected */
+	rte_dmadev_stats_get(dev_id, vchan, &stats);
+	if (stats.completed != baseline.completed + COMP_BURST_SZ)
+		ERR_RETURN("Completed stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.completed, baseline.completed + COMP_BURST_SZ);
+	for (i = 0; i < status_count; i++)
+		err_count += (status[i] != RTE_DMA_STATUS_SUCCESSFUL);
+	if (stats.errors != baseline.errors + err_count)
+		ERR_RETURN("'Errors' stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.errors, baseline.errors + err_count);
+
+	return 0;
+}
+
+static int
+test_individual_status_query_with_failure(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test gathering batch statuses one at a time */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count = 0, status_count = 0;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dmadev_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, OPT_FENCE(j));
+		if (id < 0)
+			ERR_RETURN("Error with rte_dmadev_copy for buffer %u\n", j);
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* use regular "completed" until we hit error */
+	while (!error) {
+		uint16_t n = rte_dmadev_completed(dev_id, vchan, 1, &idx, &error);
+		count += n;
+		if (n > 1 || count >= COMP_BURST_SZ)
+			ERR_RETURN("Error - too many completions received\n");
+		if (n == 0 && !error)
+			ERR_RETURN("Error, unexpectedly got zero completions after %u completed\n",
+					count);
+	}
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, last successful index not as expected, got %u, expected %u\n",
+				idx, invalid_addr_id - 1);
+
+	/* use completed_status until we hit end of burst */
+	while (count + status_count < COMP_BURST_SZ) {
+		uint16_t n = rte_dmadev_completed_status(dev_id, vchan, 1, &idx,
+				&status[status_count]);
+		await_hw(dev_id, vchan); /* allow delay to ensure jobs are completed */
+		status_count += n;
+		if (n != 1)
+			ERR_RETURN("Error: unexpected number of completions received, %u, not 1\n",
+					n);
+	}
+
+	/* check for single failure */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error, unexpected successful DMA transaction\n");
+	for (j = 1; j < status_count; j++)
+		if (status[j] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[j] != RTE_DMA_STATUS_NOT_ATTEMPTED)
+			ERR_RETURN("Error, unexpected DMA error reported\n");
+
+	return 0;
+}
+
+static int
+test_single_item_status_query_with_failure(int dev_id, uint16_t vchan,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* When error occurs just collect a single error using "completed_status()"
+	 * before going to back to completed() calls
+	 */
+	enum rte_dma_status_code status;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count, count2;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dmadev_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dmadev_copy for buffer %u\n", j);
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* get up to the error point */
+	count = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx)
+		ERR_RETURN("Error with rte_dmadev_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+	if (!error)
+		ERR_RETURN("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+
+	/* get the error code */
+	status_count = rte_dmadev_completed_status(dev_id, vchan, 1, &idx, &status);
+	if (status_count != 1)
+		ERR_RETURN("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+	if (status == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+
+	/* delay in case time needed after err handled to complete other jobs */
+	await_hw(dev_id, vchan);
+
+	/* get the rest of the completions without status */
+	count2 = rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (error == true)
+		ERR_RETURN("Error, got further errors post completed_status() call, for failure case %u.\n",
+				fail_idx);
+	if (count + status_count + count2 != COMP_BURST_SZ)
+		ERR_RETURN("Error, incorrect number of completions received, got %u not %u\n",
+				count + status_count + count2, COMP_BURST_SZ);
+
+	return 0;
+}
+
+static int
+test_multi_failure(int dev_id, uint16_t vchan, struct rte_mbuf **srcs, struct rte_mbuf **dsts,
+		const unsigned int *fail, size_t num_fail)
+{
+	/* test having multiple errors in one go */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	unsigned int i, j;
+	uint16_t count, err_count = 0;
+	bool error = false;
+
+	/* enqueue and gather completions in one go */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere in the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dmadev_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dmadev_copy for buffer %u\n", j);
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ, NULL, status);
+	while (count < COMP_BURST_SZ) {
+		await_hw(dev_id, vchan);
+
+		uint16_t ret = rte_dmadev_completed_status(dev_id, vchan, COMP_BURST_SZ - count,
+				NULL, &status[count]);
+		if (ret == 0)
+			ERR_RETURN("Error getting all completions for jobs. Got %u of %u\n",
+					count, COMP_BURST_SZ);
+		count += ret;
+	}
+	for (i = 0; i < count; i++)
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
+			err_count++;
+
+	if (err_count != num_fail)
+		ERR_RETURN("Error: Invalid number of failed completions returned, %u; expected %zu\n",
+			err_count, num_fail);
+
+	/* enqueue and gather completions in bursts, but getting errors one at a time */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere in the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dmadev_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dmadev_copy for buffer %u\n", j);
+	}
+	rte_dmadev_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = 0;
+	err_count = 0;
+	while (count + err_count < COMP_BURST_SZ) {
+		count += rte_dmadev_completed(dev_id, vchan, COMP_BURST_SZ, NULL, &error);
+		if (error) {
+			uint16_t ret = rte_dmadev_completed_status(dev_id, vchan, 1,
+					NULL, status);
+			if (ret != 1)
+				ERR_RETURN("Error getting error-status for completions\n");
+			err_count += ret;
+			await_hw(dev_id, vchan);
+		}
+	}
+	if (err_count != num_fail)
+		ERR_RETURN("Error: Incorrect number of failed completions received, got %u not %zu\n",
+				err_count, num_fail);
+
+	return 0;
+}
+
+static int
+test_completion_status(int dev_id, uint16_t vchan, bool fence)
+{
+	const unsigned int fail[] = {0, 7, 14, 15};
+	struct rte_mbuf *srcs[COMP_BURST_SZ], *dsts[COMP_BURST_SZ];
+	unsigned int i;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+	}
+
+	for (i = 0; i < RTE_DIM(fail); i++) {
+		if (test_failure_in_full_burst(dev_id, vchan, fence, srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		if (test_individual_status_query_with_failure(dev_id, vchan, fence,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		/* the test runs the same whether fenced or unfenced, but no harm in running it twice */
+		if (test_single_item_status_query_with_failure(dev_id, vchan,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+	}
+
+	if (test_multi_failure(dev_id, vchan, srcs, dsts, fail, RTE_DIM(fail)) < 0)
+		return -1;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
+static int
+test_completion_handling(int dev_id, uint16_t vchan)
+{
+	return test_completion_status(dev_id, vchan, false)              /* without fences */
+			|| test_completion_status(dev_id, vchan, true);  /* with fences */
+
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -335,6 +683,15 @@ test_dmadev_instance(uint16_t dev_id)
 	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
 		goto err;
 
+	/* to test error handling we can provide null pointers for source or dest in copies. This
+	 * requires VA mode in DPDK, since NULL(0) is a valid physical address.
+	 */
+	if (rte_eal_iova_mode() != RTE_IOVA_VA)
+		printf("DMA Dev %u: DPDK not in VA mode, skipping error handling tests\n", dev_id);
+	else if (runtest("error handling", test_completion_handling, 1,
+			dev_id, vchan, !CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dmadev_stop(dev_id);
 	rte_dmadev_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread
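
The completion-handling pattern these failure tests exercise can be
condensed as follows: rte_dmadev_completed() returns the jobs that finished
before the fault and sets has_error, after which rte_dmadev_completed_status()
retrieves per-job status codes for the remainder. The helper below is only an
illustrative condensation; the array size and polling policy are assumptions:

#include <stdbool.h>
#include <stdint.h>
#include <rte_common.h>
#include <rte_dmadev.h>

/* sketch: drain nb_ops jobs from a vchan, tolerating failures mid-burst.
 * Returns the number of jobs that completed with an error status.
 */
static int
drain_with_errors(uint16_t dev_id, uint16_t vchan, uint16_t nb_ops)
{
	enum rte_dma_status_code status[64];
	uint16_t last_idx, i, n, burst;
	uint16_t done = 0, errors = 0;
	bool has_error = false;

	/* gather whatever completed cleanly; this stops early once a job fails */
	done = rte_dmadev_completed(dev_id, vchan, nb_ops, &last_idx, &has_error);

	while (done < nb_ops) {
		/* a job failed (or work is outstanding): use the status variant
		 * to get per-job result codes for the rest of the burst
		 */
		burst = RTE_MIN((uint16_t)(nb_ops - done), (uint16_t)RTE_DIM(status));
		n = rte_dmadev_completed_status(dev_id, vchan, burst, &last_idx, status);
		if (n == 0)
			break;	/* nothing reported; a real app would re-poll or time out */
		for (i = 0; i < n; i++)
			if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
				errors++;
		done += n;
	}
	return errors;
}
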

* [dpdk-dev] [PATCH v3 7/8] app/test: add dmadev fill tests
  2021-09-07 16:49 ` [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers Bruce Richardson
                     ` (5 preceding siblings ...)
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 6/8] app/test: test dmadev instance failure handling Bruce Richardson
@ 2021-09-07 16:49   ` Bruce Richardson
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 8/8] app/test: add dmadev burst capacity API test Bruce Richardson
  7 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-07 16:49 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

For dma devices which support the fill operation, run unit tests to
verify fill behaviour is correct.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 49 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 35ba1ff45d..9ad865f249 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -624,7 +624,51 @@ test_completion_handling(int dev_id, uint16_t vchan)
 {
 	return test_completion_status(dev_id, vchan, false)              /* without fences */
 			|| test_completion_status(dev_id, vchan, true);  /* with fences */
+}
+
+static int
+test_enqueue_fill(int dev_id, uint16_t vchan)
+{
+	const unsigned int lengths[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst;
+	char *dst_data;
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	dst = rte_pktmbuf_alloc(pool);
+	if (dst == NULL)
+		ERR_RETURN("Failed to allocate mbuf\n");
+	dst_data = rte_pktmbuf_mtod(dst, char *);
+
+	for (i = 0; i < RTE_DIM(lengths); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, rte_pktmbuf_data_len(dst));
+
+		/* perform the fill operation */
+		int id = rte_dmadev_fill(dev_id, vchan, pattern,
+				rte_pktmbuf_iova(dst), lengths[i], RTE_DMA_OP_FLAG_SUBMIT);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dmadev_fill\n");
+		await_hw(dev_id, vchan);
+
+		if (rte_dmadev_completed(dev_id, vchan, 1, NULL, NULL) != 1)
+			ERR_RETURN("Error: fill operation failed (length: %u)\n", lengths[i]);
+		/* check the data from the fill operation is correct */
+		for (j = 0; j < lengths[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte)
+				ERR_RETURN("Error with fill operation (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], pat_byte);
+		}
+		/* check that the data after the fill operation was not written to */
+		for (; j < rte_pktmbuf_data_len(dst); j++)
+			if (dst_data[j] != 0)
+				ERR_RETURN("Error, fill operation wrote too far (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], 0);
+	}
 
+	rte_pktmbuf_free(dst);
+	return 0;
 }
 
 static int
@@ -692,6 +736,11 @@ test_dmadev_instance(uint16_t dev_id)
 			dev_id, vchan, !CHECK_ERRS) < 0)
 		goto err;
 
+	if ((info.dev_capa & RTE_DMADEV_CAPA_OPS_FILL) == 0)
+		printf("DMA Dev %u: No device fill support, skipping fill tests\n", dev_id);
+	else if (runtest("fill", test_enqueue_fill, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dmadev_stop(dev_id);
 	rte_dmadev_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread
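
As a quick illustration of the operation verified above, a minimal fill call
using the v3 names might look as follows; the helper and its busy-wait are
assumptions for illustration only:

#include <stdbool.h>
#include <stdint.h>
#include <rte_common.h>
#include <rte_dmadev.h>

/* sketch: fill 'len' bytes at dst_iova with a repeating 64-bit pattern and
 * wait for the single job to complete.
 */
static int
fill_buffer(uint16_t dev_id, uint16_t vchan, rte_iova_t dst_iova, uint32_t len)
{
	/* destination receives the byte layout of this value, repeating every 8 bytes */
	const uint64_t pattern = 0xfedcba9876543210;
	bool has_error = false;

	if (rte_dmadev_fill(dev_id, vchan, pattern, dst_iova, len,
			RTE_DMA_OP_FLAG_SUBMIT) < 0)
		return -1;

	/* busy-wait for completion; a real application would poll less tightly */
	while (rte_dmadev_completed(dev_id, vchan, 1, NULL, &has_error) == 0)
		;
	return has_error ? -1 : 0;
}
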

* [dpdk-dev] [PATCH v3 8/8] app/test: add dmadev burst capacity API test
  2021-09-07 16:49 ` [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers Bruce Richardson
                     ` (6 preceding siblings ...)
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 7/8] app/test: add dmadev fill tests Bruce Richardson
@ 2021-09-07 16:49   ` Bruce Richardson
  2021-09-08 11:03     ` Walsh, Conor
  7 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-07 16:49 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Add a test case to validate the functionality of drivers' burst capacity
API implementations.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 68 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 9ad865f249..98dddae6d6 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -671,6 +671,69 @@ test_enqueue_fill(int dev_id, uint16_t vchan)
 	return 0;
 }
 
+static int
+test_burst_capacity(int dev_id, uint16_t vchan)
+{
+#define CAP_TEST_BURST_SIZE	64
+	const int ring_space = rte_dmadev_burst_capacity(dev_id, vchan);
+	struct rte_mbuf *src, *dst;
+	int i, j, iter;
+	int cap, ret;
+	bool dma_err;
+
+	src = rte_pktmbuf_alloc(pool);
+	dst = rte_pktmbuf_alloc(pool);
+
+	/* to test capacity, we enqueue elements and check capacity is reduced
+	 * by one each time - rebaselining the expected value after each burst
+	 * as the capacity is only for a burst. We enqueue multiple bursts to
+	 * fill up half the ring, before emptying it again. We do this twice to
+	 * ensure that we get to test scenarios where we get ring wrap-around
+	 */
+	for (iter = 0; iter < 2; iter++) {
+		for (i = 0; i < (ring_space / (2 * CAP_TEST_BURST_SIZE)) + 1; i++) {
+			cap = rte_dmadev_burst_capacity(dev_id, vchan);
+
+			for (j = 0; j < CAP_TEST_BURST_SIZE; j++) {
+				ret = rte_dmadev_copy(dev_id, vchan, rte_pktmbuf_iova(src),
+						rte_pktmbuf_iova(dst), COPY_LEN, 0);
+				if (ret < 0)
+					ERR_RETURN("Error with rte_dmadev_copy\n");
+
+				if (rte_dmadev_burst_capacity(dev_id, vchan) != cap - (j + 1))
+					ERR_RETURN("Error, ring capacity did not change as expected\n");
+			}
+			if (rte_dmadev_submit(dev_id, vchan) < 0)
+				ERR_RETURN("Error, failed to submit burst\n");
+
+			if (cap < rte_dmadev_burst_capacity(dev_id, vchan))
+				ERR_RETURN("Error, avail ring capacity has gone up, not down\n");
+		}
+		await_hw(dev_id, vchan);
+
+		for (i = 0; i < (ring_space / (2 * CAP_TEST_BURST_SIZE)) + 1; i++) {
+			ret = rte_dmadev_completed(dev_id, vchan,
+					CAP_TEST_BURST_SIZE, NULL, &dma_err);
+			if (ret != CAP_TEST_BURST_SIZE || dma_err) {
+				enum rte_dma_status_code status;
+
+				rte_dmadev_completed_status(dev_id, vchan, 1, NULL, &status);
+				ERR_RETURN("Error with rte_dmadev_completed, %u [expected: %u], dma_err = %d, i = %u, iter = %u, status = %u\n",
+						ret, CAP_TEST_BURST_SIZE, dma_err, i, iter, status);
+			}
+		}
+		cap = rte_dmadev_burst_capacity(dev_id, vchan);
+		if (cap != ring_space)
+			ERR_RETURN("Error, ring capacity has not reset to original value, got %u, expected %u\n",
+					cap, ring_space);
+	}
+
+	rte_pktmbuf_free(src);
+	rte_pktmbuf_free(dst);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -741,6 +804,11 @@ test_dmadev_instance(uint16_t dev_id)
 	else if (runtest("fill", test_enqueue_fill, 1, dev_id, vchan, CHECK_ERRS) < 0)
 		goto err;
 
+	if (rte_dmadev_burst_capacity(dev_id, vchan) == -ENOTSUP)
+		printf("DMA Dev %u: Burst capacity API not supported, skipping tests\n", dev_id);
+	else if (runtest("burst capacity", test_burst_capacity, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dmadev_stop(dev_id);
 	rte_dmadev_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/8] dmadev: add channel status check for testing use
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 1/8] dmadev: add channel status check for testing use Bruce Richardson
@ 2021-09-08 10:50     ` Walsh, Conor
  2021-09-08 13:20     ` Kevin Laatz
  1 sibling, 0 replies; 130+ messages in thread
From: Walsh, Conor @ 2021-09-08 10:50 UTC (permalink / raw)
  To: Richardson, Bruce, dev; +Cc: Laatz, Kevin, fengchengwen, jerinj


> Subject: [PATCH v3 1/8] dmadev: add channel status check for testing use
> 
> Add in a function to check if a device or vchan has completed all jobs
> assigned to it, without gathering in the results. This is primarily for
> use in testing, to allow the hardware to be in a known-state prior to
> gathering completions.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

Reviewed-by: Conor Walsh <conor.walsh@intel.com>

<snip>

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API Bruce Richardson
@ 2021-09-08 10:53     ` Walsh, Conor
  2021-09-08 18:17     ` Jerin Jacob
  1 sibling, 0 replies; 130+ messages in thread
From: Walsh, Conor @ 2021-09-08 10:53 UTC (permalink / raw)
  To: Richardson, Bruce, dev; +Cc: Laatz, Kevin, fengchengwen, jerinj


> Subject: [PATCH v3 2/8] dmadev: add burst capacity API
> 
> From: Kevin Laatz <kevin.laatz@intel.com>
> 
> Add a burst capacity check API to the dmadev library. This API is useful to
> applications which need to know how many descriptors can be enqueued in the
> current batch. For example, it could be used to determine whether all
> segments of a multi-segment packet can be enqueued in the same batch or
> not
> (to avoid half-offload of the packet).
> 
> Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
> ---

Reviewed-by: Conor Walsh <conor.walsh@intel.com>

<snip>

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 8/8] app/test: add dmadev burst capacity API test
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 8/8] app/test: add dmadev burst capacity API test Bruce Richardson
@ 2021-09-08 11:03     ` Walsh, Conor
  0 siblings, 0 replies; 130+ messages in thread
From: Walsh, Conor @ 2021-09-08 11:03 UTC (permalink / raw)
  To: Richardson, Bruce, dev; +Cc: Laatz, Kevin, fengchengwen, jerinj


> Subject: [PATCH v3 8/8] app/test: add dmadev burst capacity API test
> 
> From: Kevin Laatz <kevin.laatz@intel.com>
> 
> Add a test case to validate the functionality of drivers' burst capacity
> API implementations.
> 
> Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---

Reviewed-by: Conor Walsh <conor.walsh@intel.com>

<snip>

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/8] dmadev: add channel status check for testing use
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 1/8] dmadev: add channel status check for testing use Bruce Richardson
  2021-09-08 10:50     ` Walsh, Conor
@ 2021-09-08 13:20     ` Kevin Laatz
  1 sibling, 0 replies; 130+ messages in thread
From: Kevin Laatz @ 2021-09-08 13:20 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, fengchengwen, jerinj

On 07/09/2021 17:49, Bruce Richardson wrote:
> Add in a function to check if a device or vchan has completed all jobs
> assigned to it, without gathering in the results. This is primarily for
> use in testing, to allow the hardware to be in a known-state prior to
> gathering completions.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/dmadev/rte_dmadev.c      | 16 ++++++++++++++++
>   lib/dmadev/rte_dmadev.h      | 33 +++++++++++++++++++++++++++++++++
>   lib/dmadev/rte_dmadev_core.h |  6 ++++++
>   lib/dmadev/version.map       |  1 +
>   4 files changed, 56 insertions(+)
>

Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 3/8] app/test: add basic dmadev instance tests
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 3/8] app/test: add basic dmadev instance tests Bruce Richardson
@ 2021-09-08 13:21     ` Kevin Laatz
  0 siblings, 0 replies; 130+ messages in thread
From: Kevin Laatz @ 2021-09-08 13:21 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, fengchengwen, jerinj

On 07/09/2021 17:49, Bruce Richardson wrote:
> Run basic sanity tests for configuring, starting and stopping a dmadev
> instance to help validate drivers. This also provides the framework for
> future tests for data-path operation.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> Reviewed-by: Conor Walsh <conor.walsh@intel.com>
> ---
>   app/test/test_dmadev.c | 72 +++++++++++++++++++++++++++++++++++++++++-
>   1 file changed, 71 insertions(+), 1 deletion(-)
>

Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API Bruce Richardson
  2021-09-08 10:53     ` Walsh, Conor
@ 2021-09-08 18:17     ` Jerin Jacob
  2021-09-09  8:16       ` Bruce Richardson
  1 sibling, 1 reply; 130+ messages in thread
From: Jerin Jacob @ 2021-09-08 18:17 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, fengchengwen, Jerin Jacob,
	Satananda Burla, Radha Mohan Chintakuntla

On Tue, 7 Sep 2021, 10:25 pm Bruce Richardson, <bruce.richardson@intel.com>
wrote:

> From: Kevin Laatz <kevin.laatz@intel.com>
>
> Add a burst capacity check API to the dmadev library. This API is useful to
> applications which need to know how many descriptors can be enqueued in the
> current batch. For example, it could be used to determine whether all
> segments of a multi-segment packet can be enqueued in the same batch or not
> (to avoid half-offload of the packet).
>

# Could you share more details on the use case with vhost?
# Are they planning to use this in the fast path? If so, does it need to
move to a fast-path function pointer?
# Assume the use case needs N rte_dma_copy calls to complete a logical copy
at the vhost level. Is there any issue with half-offload, meaning the
logical copy is only completed once the Nth one completes successfully. Right?
# There is already nb_desc with which a dma_queue is configured. So if the
application does its accounting properly, it knows how many desc it has
used up and how many completions it has processed.

Would like to understand more details on this API usage.

Sorry for the format issue, sending from mobile.


> Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
> ---
>  lib/dmadev/rte_dmadev.c      | 11 +++++++++++
>  lib/dmadev/rte_dmadev.h      | 19 +++++++++++++++++++
>  lib/dmadev/rte_dmadev_core.h |  5 +++++
>  lib/dmadev/version.map       |  1 +
>  4 files changed, 36 insertions(+)
>
> diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
> index ab45928efb..6494871f05 100644
> --- a/lib/dmadev/rte_dmadev.c
> +++ b/lib/dmadev/rte_dmadev.c
> @@ -573,6 +573,17 @@ dmadev_dump_capability(FILE *f, uint64_t dev_capa)
>         fprintf(f, "\n");
>  }
>
> +int
> +rte_dmadev_burst_capacity(uint16_t dev_id, uint16_t vchan)
> +{
> +       const struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +
> +       RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
> +
> +       RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->burst_capacity, -ENOTSUP);
> +       return (*dev->dev_ops->burst_capacity)(dev, vchan);
> +}
> +
>  int
>  rte_dmadev_dump(uint16_t dev_id, FILE *f)
>  {
> diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
> index 39d73872c8..8b84914810 100644
> --- a/lib/dmadev/rte_dmadev.h
> +++ b/lib/dmadev/rte_dmadev.h
> @@ -673,6 +673,25 @@ __rte_experimental
>  int
>  rte_dmadev_vchan_status(uint16_t dev_id, uint16_t vchan, enum
> rte_dmadev_vchan_status *status);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Check remaining capacity in descriptor ring for the current burst.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + *
> + * @return
> + *   - Remaining space in the descriptor ring for the current burst on
> success.
> + *   - -ENOTSUP: if not supported by the device.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_burst_capacity(uint16_t dev_id, uint16_t vchan);
> +
>  /**
>   * @warning
>   * @b EXPERIMENTAL: this API may change without prior notice.
> diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
> index 3c9d698044..2756936798 100644
> --- a/lib/dmadev/rte_dmadev_core.h
> +++ b/lib/dmadev/rte_dmadev_core.h
> @@ -52,6 +52,10 @@ typedef int (*rte_dmadev_stats_get_t)(const struct
> rte_dmadev *dev,
>  typedef int (*rte_dmadev_stats_reset_t)(struct rte_dmadev *dev, uint16_t
> vchan);
>  /**< @internal Used to reset basic statistics. */
>
> +typedef uint16_t (*rte_dmadev_burst_capacity_t)(const struct rte_dmadev
> *dev,
> +                       uint16_t vchan);
> +/** < @internal Used to check the remaining space in descriptor ring. */
> +
>  typedef int (*rte_dmadev_dump_t)(const struct rte_dmadev *dev, FILE *f);
>  /**< @internal Used to dump internal information. */
>
> @@ -114,6 +118,7 @@ struct rte_dmadev_ops {
>         rte_dmadev_stats_get_t      stats_get;
>         rte_dmadev_stats_reset_t    stats_reset;
>
> +       rte_dmadev_burst_capacity_t burst_capacity;
>         rte_dmadev_vchan_status_t   vchan_status;
>
>         rte_dmadev_dump_t           dev_dump;
> diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
> index 10eeb0f7a3..56cb279e8f 100644
> --- a/lib/dmadev/version.map
> +++ b/lib/dmadev/version.map
> @@ -1,6 +1,7 @@
>  EXPERIMENTAL {
>         global:
>
> +       rte_dmadev_burst_capacity;
>         rte_dmadev_close;
>         rte_dmadev_completed;
>         rte_dmadev_completed_status;
> --
> 2.30.2
>
>

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-08 18:17     ` Jerin Jacob
@ 2021-09-09  8:16       ` Bruce Richardson
  2021-09-17 13:54         ` Jerin Jacob
  0 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-09  8:16 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, fengchengwen, Jerin Jacob,
	Satananda Burla, Radha Mohan Chintakuntla, jiayu.hu, sunil.pai.g

On Wed, Sep 08, 2021 at 11:47:59PM +0530, Jerin Jacob wrote:
>    On Tue, 7 Sep 2021, 10:25 pm Bruce Richardson,
>    <[1]bruce.richardson@intel.com> wrote:
> 
>      From: Kevin Laatz <[2]kevin.laatz@intel.com>
>      Add a burst capacity check API to the dmadev library. This API is
>      useful to applications which need to know how many descriptors can
>      be enqueued in the current batch. For example, it could be used to
>      determine whether all segments of a multi-segment packet can be
>      enqueued in the same batch or not (to avoid half-offload of the
>      packet).
> 
>    # Could you share more details on the use case with vhost?
>    # Are they planning to use this in the fast path? If so, does it need
>    to move to a fast-path function pointer?

I believe the intent is to use it on fastpath, but I would assume only once
per burst, so the penalty for non-fastpath may be acceptable. As you point
out - for an app that really doesn't want to have to pay that penalty,
tracking ring use itself is possible.

The desire for fast-path use is also why I suggested having the space as an
optional return parameter from the submit API call. It could logically also
be a return value from the "completed" call, which might actually make more
sense.

>    # Assume the use case needs N rte_dma_copy calls to complete a logical
>    copy at the vhost level. Is there any issue with half-offload, meaning
>    the logical copy is only completed once the Nth one completes
>    successfully. Right?

Yes, as I understand it, the issue is for multi-segment packets, where we
only want to enqueue the first segment if we know we will succeed with the
final one too.

>    # There is already nb_desc with which a dma_queue is configured. So if
>    the application does its accounting properly, it knows how many desc it
>    has used up and how many completions it has processed.

Agreed. It's just more work for the app, and for simplicity and
completeness I think we should add this API. Because there are other
options I think it should be available, but not as a fast-path fn (though
again, the difference is likely very small for something not called for
every enqueue).

>    Would like to understand more details on this API usage.
> 
Adding Sunil and Jiayu on CC who are looking at this area from the OVS and
vhost sides.

/Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread
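
To make the multi-segment case described above concrete, here is a rough
sketch of that kind of check, using the v3 function names from this series.
The helper is an illustration only, not code from vhost or OVS, and it
assumes a contiguous destination area at dst_iova:

#include <errno.h>
#include <rte_dmadev.h>
#include <rte_mbuf.h>

/* sketch: enqueue every segment of a multi-segment packet, or nothing at
 * all, so that a packet is never left half-offloaded.
 */
static int
copy_pkt_all_or_nothing(uint16_t dev_id, uint16_t vchan, struct rte_mbuf *pkt,
		rte_iova_t dst_iova)
{
	struct rte_mbuf *seg;
	int cap = rte_dmadev_burst_capacity(dev_id, vchan);

	/* refuse up front if the remaining capacity cannot hold all segments */
	if (cap >= 0 && cap < pkt->nb_segs)
		return -ENOSPC;

	for (seg = pkt; seg != NULL; seg = seg->next) {
		if (rte_dmadev_copy(dev_id, vchan, rte_pktmbuf_iova(seg), dst_iova,
				rte_pktmbuf_data_len(seg), 0) < 0)
			return -EIO;	/* unexpected, given the capacity check */
		dst_iova += rte_pktmbuf_data_len(seg);
	}
	rte_dmadev_submit(dev_id, vchan);
	return 0;
}
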

* [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
                   ` (8 preceding siblings ...)
  2021-09-07 16:49 ` [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers Bruce Richardson
@ 2021-09-17 13:30 ` Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 1/9] dmadev: add channel status check for testing use Bruce Richardson
                     ` (8 more replies)
  2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
  11 siblings, 9 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:30 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

This patchset adds a fairly comprehensive set of tests for basic dmadev
functionality. Tests are added to verify basic copy operation in each
device, using both submit function and submit flag, and verifying completion
gathering using both "completed()" and "completed_status()" functions. Beyond
that, tests are then added for the error reporting and handling, as is a suite
of tests for the fill() operation for devices that support those.

Depends-on: series-18960 ("support dmadev")

V4:
* rebased to v22 of dmadev set
* added patch for iteration macro for dmadevs to allow testing each dmadev in
  turn

V3:
* add patch and tests for a burst-capacity function
* addressed review feedback from v2
* code cleanups to try and shorten code where possible

V2:
* added into dmadev a API to check for a device being idle
* removed the hard-coded timeout delays before checking completions, and instead
  wait for device to be idle
* added in checks for statistics updates as part of some tests
* fixed issue identified by internal coverity scan
* other minor miscellaneous changes and fixes.

Bruce Richardson (6):
  dmadev: add channel status check for testing use
  dmadev: add device iterator
  app/test: add basic dmadev instance tests
  app/test: add basic dmadev copy tests
  app/test: add more comprehensive dmadev copy tests
  app/test: test dmadev instance failure handling

Kevin Laatz (3):
  dmadev: add burst capacity API
  app/test: add dmadev fill tests
  app/test: add dmadev burst capacity API test

 app/test/test_dmadev.c       | 820 ++++++++++++++++++++++++++++++++++-
 lib/dmadev/rte_dmadev.c      |  39 ++
 lib/dmadev/rte_dmadev.h      |  70 +++
 lib/dmadev/rte_dmadev_core.h |   9 +
 lib/dmadev/version.map       |   3 +
 5 files changed, 940 insertions(+), 1 deletion(-)

--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v4 1/9] dmadev: add channel status check for testing use
  2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
@ 2021-09-17 13:30   ` Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 2/9] dmadev: add burst capacity API Bruce Richardson
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:30 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add in a function to check if a device or vchan has completed all jobs
assigned to it, without gathering in the results. This is primarily for
use in testing, to allow the hardware to be in a known-state prior to
gathering completions.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/dmadev/rte_dmadev.c      | 15 +++++++++++++++
 lib/dmadev/rte_dmadev.h      | 33 +++++++++++++++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h |  6 ++++++
 lib/dmadev/version.map       |  1 +
 4 files changed, 55 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 544937acf8..859958fff8 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -716,3 +716,18 @@ rte_dma_dump(int16_t dev_id, FILE *f)
 
 	return 0;
 }
+
+int
+rte_dma_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status)
+{
+	struct rte_dma_dev *dev = &rte_dma_devices[dev_id];
+
+	RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
+	if (vchan >= dev->data->dev_conf.nb_vchans) {
+		RTE_DMA_LOG(ERR, "Device %u vchan %u out of range\n", dev_id, vchan);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vchan_status, -ENOTSUP);
+	return (*dev->dev_ops->vchan_status)(dev, vchan, status);
+}
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index be54f2cb9d..86c4a38f83 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -660,6 +660,39 @@ int rte_dma_stats_get(int16_t dev_id, uint16_t vchan,
 __rte_experimental
 int rte_dma_stats_reset(int16_t dev_id, uint16_t vchan);
 
+/**
+ * device vchannel status
+ *
+ * Enum with the options for the channel status, either idle, active or halted due to error
+ */
+enum rte_dma_vchan_status {
+	RTE_DMA_VCHAN_IDLE,          /**< not processing, awaiting ops */
+	RTE_DMA_VCHAN_ACTIVE,        /**< currently processing jobs */
+	RTE_DMA_VCHAN_HALTED_ERROR,  /**< not processing due to error, cannot accept new ops */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Determine if all jobs have completed on a device channel.
+ * This function is primarily designed for testing use, as it allows a process to check if
+ * all jobs are completed, without actually gathering completions from those jobs.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ * @param[out] status
+ *   The vchan status
+ * @return
+ *   0 - call completed successfully
+ *   < 0 - error code indicating there was a problem calling the API
+ */
+__rte_experimental
+int
+rte_dma_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index edb3286cbb..0eec1aa43b 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -46,6 +46,10 @@ typedef int (*rte_dma_vchan_setup_t)(struct rte_dma_dev *dev, uint16_t vchan,
 				const struct rte_dma_vchan_conf *conf,
 				uint32_t conf_sz);
 
+/** @internal Used to check if a virtual channel has finished all jobs. */
+typedef int (*rte_dma_vchan_status_t)(const struct rte_dma_dev *dev, uint16_t vchan,
+		enum rte_dma_vchan_status *status);
+
 /** @internal Used to retrieve basic statistics. */
 typedef int (*rte_dma_stats_get_t)(const struct rte_dma_dev *dev,
 			uint16_t vchan, struct rte_dma_stats *stats,
@@ -119,6 +123,8 @@ struct rte_dma_dev_ops {
 	rte_dma_stats_reset_t    stats_reset;
 
 	rte_dma_dump_t           dev_dump;
+	rte_dma_vchan_status_t   vchan_status;
+
 };
 
 /**
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index c780463bb2..40ea517016 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -20,6 +20,7 @@ EXPERIMENTAL {
 	rte_dma_stop;
 	rte_dma_submit;
 	rte_dma_vchan_setup;
+	rte_dma_vchan_status;
 
 	local: *;
 };
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v4 2/9] dmadev: add burst capacity API
  2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 1/9] dmadev: add channel status check for testing use Bruce Richardson
@ 2021-09-17 13:30   ` Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 3/9] dmadev: add device iterator Bruce Richardson
                     ` (6 subsequent siblings)
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:30 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj

From: Kevin Laatz <kevin.laatz@intel.com>

Add a burst capacity check API to the dmadev library. This API is useful to
applications which need to know how many descriptors can be enqueued in the
current batch. For example, it could be used to determine whether all
segments of a multi-segment packet can be enqueued in the same batch or not
(to avoid half-offload of the packet).
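
As a purely illustrative sketch of that multi-segment use case (not part of
this patch; the contiguous destination buffer at dst_iova is an assumption),
a caller might gate the enqueue on the reported capacity:

#include <rte_dmadev.h>
#include <rte_mbuf.h>

/* Sketch only: enqueue either every segment of a packet or none of them,
 * by checking the remaining burst capacity first. Assumes a contiguous
 * destination buffer starting at dst_iova.
 */
static int
copy_packet_segments(int16_t dev_id, uint16_t vchan, struct rte_mbuf *pkt,
		rte_iova_t dst_iova)
{
	struct rte_mbuf *seg;
	int cap = rte_dma_burst_capacity(dev_id, vchan);

	if (cap < 0 || (unsigned int)cap < pkt->nb_segs)
		return -1;	/* would half-offload: fall back or retry later */

	for (seg = pkt; seg != NULL; seg = seg->next) {
		if (rte_dma_copy(dev_id, vchan, rte_mbuf_data_iova(seg),
				dst_iova, rte_pktmbuf_data_len(seg), 0) < 0)
			return -1;
		dst_iova += rte_pktmbuf_data_len(seg);
	}
	return rte_dma_submit(dev_id, vchan);
}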

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 lib/dmadev/rte_dmadev.c      | 11 +++++++++++
 lib/dmadev/rte_dmadev.h      | 19 +++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h |  5 ++++-
 lib/dmadev/version.map       |  1 +
 4 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 859958fff8..ed342e0d32 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -684,6 +684,17 @@ dma_dump_capability(FILE *f, uint64_t dev_capa)
 	fprintf(f, "\n");
 }
 
+int
+rte_dma_burst_capacity(uint16_t dev_id, uint16_t vchan)
+{
+	const struct rte_dma_dev *dev = &rte_dma_devices[dev_id];
+
+	RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->burst_capacity, -ENOTSUP);
+	return (*dev->dev_ops->burst_capacity)(dev, vchan);
+}
+
 int
 rte_dma_dump(int16_t dev_id, FILE *f)
 {
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index 86c4a38f83..be4bb18ee6 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -693,6 +693,25 @@ __rte_experimental
 int
 rte_dma_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Check remaining capacity in descriptor ring for the current burst.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ *
+ * @return
+ *   - Remaining space in the descriptor ring for the current burst on success.
+ *   - -ENOTSUP: if not supported by the device.
+ */
+__rte_experimental
+int
+rte_dma_burst_capacity(uint16_t dev_id, uint16_t vchan);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index 0eec1aa43b..21290ef471 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -58,6 +58,9 @@ typedef int (*rte_dma_stats_get_t)(const struct rte_dma_dev *dev,
 /** @internal Used to reset basic statistics. */
 typedef int (*rte_dma_stats_reset_t)(struct rte_dma_dev *dev, uint16_t vchan);
 
+/** @internal Used to check the remaining space in descriptor ring. */
+typedef uint16_t (*rte_dma_burst_capacity_t)(const struct rte_dma_dev *dev, uint16_t vchan);
+
 /** @internal Used to dump internal information. */
 typedef int (*rte_dma_dump_t)(const struct rte_dma_dev *dev, FILE *f);
 
@@ -124,7 +127,7 @@ struct rte_dma_dev_ops {
 
 	rte_dma_dump_t           dev_dump;
 	rte_dma_vchan_status_t   vchan_status;
-
+	rte_dma_burst_capacity_t burst_capacity;
 };
 
 /**
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 40ea517016..66420c4ede 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -1,6 +1,7 @@
 EXPERIMENTAL {
 	global:
 
+	rte_dma_burst_capacity;
 	rte_dma_close;
 	rte_dma_completed;
 	rte_dma_completed_status;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v4 3/9] dmadev: add device iterator
  2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 1/9] dmadev: add channel status check for testing use Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 2/9] dmadev: add burst capacity API Bruce Richardson
@ 2021-09-17 13:30   ` Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 4/9] app/test: add basic dmadev instance tests Bruce Richardson
                     ` (5 subsequent siblings)
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:30 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a function and a wrapper macro to iterate over all DMA devices.
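
For illustration only (not part of the patch), the macro is expected to be
used roughly as follows to walk all probed devices:

#include <stdio.h>
#include <rte_dmadev.h>

/* Sketch only: iterate over every valid dmadev and print some basic info. */
static void
list_dmadevs(void)
{
	struct rte_dma_info info;
	int i;

	RTE_DMA_FOREACH_DEV(i) {
		if (rte_dma_info_get(i, &info) == 0)
			printf("dmadev %d: max %u vchans, NUMA node %d\n",
					i, info.max_vchans, info.numa_node);
	}
}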

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/dmadev/rte_dmadev.c | 13 +++++++++++++
 lib/dmadev/rte_dmadev.h | 18 ++++++++++++++++++
 lib/dmadev/version.map  |  1 +
 3 files changed, 32 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index ed342e0d32..ba189f3539 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -55,6 +55,19 @@ rte_dma_dev_max(size_t dev_max)
 	return 0;
 }
 
+uint16_t
+rte_dma_next_dev(uint16_t start_dev_id)
+{
+	uint16_t dev_id = start_dev_id;
+	while (dev_id < dma_devices_max && rte_dma_devices[dev_id].state == RTE_DMA_DEV_UNUSED)
+		dev_id++;
+
+	if (dev_id < dma_devices_max)
+		return dev_id;
+
+	return UINT16_MAX;
+}
+
 static int
 dma_check_name(const char *name)
 {
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index be4bb18ee6..d262b8ed8d 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -219,6 +219,24 @@ bool rte_dma_is_valid(int16_t dev_id);
 __rte_experimental
 uint16_t rte_dma_count_avail(void);
 
+/**
+ * Iterates over valid dmadev instances.
+ *
+ * @param start_dev_id
+ *   The id of the next possible dmadev.
+ * @return
+ *   Next valid dmadev, UINT16_MAX if there is none.
+ */
+__rte_experimental
+uint16_t rte_dma_next_dev(uint16_t start_dev_id);
+
+/** Utility macro to iterate over all available dmadevs */
+#define RTE_DMA_FOREACH_DEV(p) \
+	for (p = rte_dma_next_dev(0); \
+	     (uint16_t)p < UINT16_MAX; \
+	     p = rte_dma_next_dev(p + 1))
+
+
 /** DMA device support memory-to-memory transfer.
  *
  * @see struct rte_dma_info::dev_capa
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 66420c4ede..0ab570a1be 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -15,6 +15,7 @@ EXPERIMENTAL {
 	rte_dma_get_dev_id;
 	rte_dma_info_get;
 	rte_dma_is_valid;
+	rte_dma_next_dev;
 	rte_dma_start;
 	rte_dma_stats_get;
 	rte_dma_stats_reset;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v4 4/9] app/test: add basic dmadev instance tests
  2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (2 preceding siblings ...)
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 3/9] dmadev: add device iterator Bruce Richardson
@ 2021-09-17 13:30   ` Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 5/9] app/test: add basic dmadev copy tests Bruce Richardson
                     ` (4 subsequent siblings)
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:30 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Run basic sanity tests for configuring, starting and stopping a dmadev
instance to help validate drivers. This also provides the framework for
future tests for data-path operation.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 72 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 71 insertions(+), 1 deletion(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index e765ec5f2c..28fbdc0b1f 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -3,6 +3,8 @@
  * Copyright(c) 2021 Intel Corporation
  */
 
+#include <inttypes.h>
+
 #include <rte_dmadev.h>
 #include <rte_bus_vdev.h>
 
@@ -11,6 +13,65 @@
 /* from test_dmadev_api.c */
 extern int test_dma_api(uint16_t dev_id);
 
+#define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); return -1; } while (0)
+
+static void
+__rte_format_printf(3, 4)
+print_err(const char *func, int lineno, const char *format, ...)
+{
+	va_list ap;
+
+	fprintf(stderr, "In %s:%d - ", func, lineno);
+	va_start(ap, format);
+	vfprintf(stderr, format, ap);
+	va_end(ap);
+}
+
+static int
+test_dmadev_instance(uint16_t dev_id)
+{
+#define TEST_RINGSIZE 512
+	struct rte_dma_stats stats;
+	struct rte_dma_info info;
+	const struct rte_dma_conf conf = { .nb_vchans = 1};
+	const struct rte_dma_vchan_conf qconf = {
+			.direction = RTE_DMA_DIR_MEM_TO_MEM,
+			.nb_desc = TEST_RINGSIZE,
+	};
+	const int vchan = 0;
+
+	printf("\n### Test dmadev instance %u\n", dev_id);
+
+	rte_dma_info_get(dev_id, &info);
+	if (info.max_vchans < 1)
+		ERR_RETURN("Error, no channels available on device id %u\n", dev_id);
+
+	if (rte_dma_configure(dev_id, &conf) != 0)
+		ERR_RETURN("Error with rte_dma_configure()\n");
+
+	if (rte_dma_vchan_setup(dev_id, vchan, &qconf) < 0)
+		ERR_RETURN("Error with queue configuration\n");
+
+	rte_dma_info_get(dev_id, &info);
+	if (info.nb_vchans != 1)
+		ERR_RETURN("Error, no configured queues reported on device id %u\n", dev_id);
+
+	if (rte_dma_start(dev_id) != 0)
+		ERR_RETURN("Error with rte_dma_start()\n");
+
+	if (rte_dma_stats_get(dev_id, vchan, &stats) != 0)
+		ERR_RETURN("Error with rte_dma_stats_get()\n");
+
+	if (stats.completed != 0 || stats.submitted != 0 || stats.errors != 0)
+		ERR_RETURN("Error device stats are not all zero: completed = %"PRIu64", "
+				"submitted = %"PRIu64", errors = %"PRIu64"\n",
+				stats.completed, stats.submitted, stats.errors);
+
+	rte_dma_stop(dev_id);
+	rte_dma_stats_reset(dev_id, vchan);
+	return 0;
+}
+
 static int
 test_apis(void)
 {
@@ -33,9 +94,18 @@ test_apis(void)
 static int
 test_dma(void)
 {
+	int i;
+
 	/* basic sanity on dmadev infrastructure */
 	if (test_apis() < 0)
-		return -1;
+		ERR_RETURN("Error performing API tests\n");
+
+	if (rte_dma_count_avail() == 0)
+		return TEST_SKIPPED;
+
+	RTE_DMA_FOREACH_DEV(i)
+		if (test_dmadev_instance(i) < 0)
+			ERR_RETURN("Error, test failure for device %d\n", i);
 
 	return 0;
 }
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v4 5/9] app/test: add basic dmadev copy tests
  2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (3 preceding siblings ...)
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 4/9] app/test: add basic dmadev instance tests Bruce Richardson
@ 2021-09-17 13:30   ` Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 6/9] app/test: add more comprehensive " Bruce Richardson
                     ` (3 subsequent siblings)
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:30 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

For each dmadev instance, perform some basic copy tests to validate that
functionality.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 175 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 175 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 28fbdc0b1f..2f22b6e382 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -6,6 +6,10 @@
 #include <inttypes.h>
 
 #include <rte_dmadev.h>
+#include <rte_mbuf.h>
+#include <rte_pause.h>
+#include <rte_cycles.h>
+#include <rte_random.h>
 #include <rte_bus_vdev.h>
 
 #include "test.h"
@@ -15,6 +19,11 @@ extern int test_dma_api(uint16_t dev_id);
 
 #define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); return -1; } while (0)
 
+#define COPY_LEN 1024
+
+static struct rte_mempool *pool;
+static uint16_t id_count;
+
 static void
 __rte_format_printf(3, 4)
 print_err(const char *func, int lineno, const char *format, ...)
@@ -27,10 +36,155 @@ print_err(const char *func, int lineno, const char *format, ...)
 	va_end(ap);
 }
 
+static int
+runtest(const char *printable, int (*test_fn)(int dev_id, uint16_t vchan), int iterations,
+		int dev_id, uint16_t vchan, bool check_err_stats)
+{
+	struct rte_dma_stats stats;
+	int i;
+
+	rte_dma_stats_reset(dev_id, vchan);
+	printf("DMA Dev %d: Running %s Tests %s\n", dev_id, printable,
+			check_err_stats ? " " : "(errors expected)");
+	for (i = 0; i < iterations; i++) {
+		if (test_fn(dev_id, vchan) < 0)
+			return -1;
+
+		rte_dma_stats_get(dev_id, 0, &stats);
+		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
+		printf("Ops completed: %"PRIu64"\t", stats.completed);
+		printf("Errors: %"PRIu64"\r", stats.errors);
+
+		if (stats.completed != stats.submitted)
+			ERR_RETURN("\nError, not all submitted jobs are reported as completed\n");
+		if (check_err_stats && stats.errors != 0)
+			ERR_RETURN("\nErrors reported during op processing, aborting tests\n");
+	}
+	printf("\n");
+	return 0;
+}
+
+static void
+await_hw(int dev_id, uint16_t vchan)
+{
+	enum rte_dma_vchan_status st;
+
+	if (rte_dma_vchan_status(dev_id, vchan, &st) < 0) {
+		/* for drivers that don't support this op, just sleep for 1 millisecond */
+		rte_delay_us_sleep(1000);
+		return;
+	}
+
+	/* for those that do, *max* end time is one second from now, but all should be faster */
+	const uint64_t end_cycles = rte_get_timer_cycles() + rte_get_timer_hz();
+	while (st == RTE_DMA_VCHAN_ACTIVE && rte_get_timer_cycles() < end_cycles) {
+		rte_pause();
+		rte_dma_vchan_status(dev_id, vchan, &st);
+	}
+}
+
+static int
+test_enqueue_copies(int dev_id, uint16_t vchan)
+{
+	unsigned int i;
+	uint16_t id;
+
+	/* test doing a single copy */
+	do {
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		id = rte_dma_copy(dev_id, vchan, rte_pktmbuf_iova(src), rte_pktmbuf_iova(dst),
+				COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT);
+		if (id != id_count)
+			ERR_RETURN("Error with rte_dma_copy, got %u, expected %u\n",
+					id, id_count);
+
+		/* give time for copy to finish, then check it was done */
+		await_hw(dev_id, vchan);
+
+		for (i = 0; i < COPY_LEN; i++)
+			if (dst_data[i] != src_data[i])
+				ERR_RETURN("Data mismatch at char %u [Got %02x not %02x]\n", i,
+						dst_data[i], src_data[i]);
+
+		/* now check completion works */
+		if (rte_dma_completed(dev_id, vchan, 1, &id, NULL) != 1)
+			ERR_RETURN("Error with rte_dma_completed\n");
+
+		if (id != id_count)
+			ERR_RETURN("Error:incorrect job id received, %u [expected %u]\n",
+					id, id_count);
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+
+		/* now check completion returns nothing more */
+		if (rte_dma_completed(dev_id, 0, 1, NULL, NULL) != 0)
+			ERR_RETURN("Error with rte_dma_completed in empty check\n");
+
+		id_count++;
+
+	} while (0);
+
+	/* test doing a multiple single copies */
+	do {
+		const uint16_t max_ops = 4;
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+		uint16_t count;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		/* perform the same copy <max_ops> times */
+		for (i = 0; i < max_ops; i++)
+			if (rte_dma_copy(dev_id, vchan,
+					rte_pktmbuf_iova(src),
+					rte_pktmbuf_iova(dst),
+					COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT) != id_count++)
+				ERR_RETURN("Error with rte_dma_copy\n");
+
+		await_hw(dev_id, vchan);
+
+		count = rte_dma_completed(dev_id, vchan, max_ops * 2, &id, NULL);
+		if (count != max_ops)
+			ERR_RETURN("Error with rte_dma_completed, got %u not %u\n",
+					count, max_ops);
+
+		if (id != id_count - 1)
+			ERR_RETURN("Error, incorrect job id returned: got %u not %u\n",
+					id, id_count - 1);
+
+		for (i = 0; i < COPY_LEN; i++)
+			if (dst_data[i] != src_data[i])
+				ERR_RETURN("Data mismatch at char %u\n", i);
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+	} while (0);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
 #define TEST_RINGSIZE 512
+#define CHECK_ERRS    true
 	struct rte_dma_stats stats;
 	struct rte_dma_info info;
 	const struct rte_dma_conf conf = { .nb_vchans = 1};
@@ -66,10 +220,31 @@ test_dmadev_instance(uint16_t dev_id)
 		ERR_RETURN("Error device stats are not all zero: completed = %"PRIu64", "
 				"submitted = %"PRIu64", errors = %"PRIu64"\n",
 				stats.completed, stats.submitted, stats.errors);
+	id_count = 0;
+
+	/* create a mempool for running tests */
+	pool = rte_pktmbuf_pool_create("TEST_DMADEV_POOL",
+			TEST_RINGSIZE * 2, /* n == num elements */
+			32,  /* cache size */
+			0,   /* priv size */
+			2048, /* data room size */
+			info.numa_node);
+	if (pool == NULL)
+		ERR_RETURN("Error with mempool creation\n");
 
+	/* run the test cases, use many iterations to ensure UINT16_MAX id wraparound */
+	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
+	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
 	return 0;
+
+err:
+	rte_mempool_free(pool);
+	rte_dma_stop(dev_id);
+	return -1;
 }
 
 static int
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v4 6/9] app/test: add more comprehensive dmadev copy tests
  2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (4 preceding siblings ...)
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 5/9] app/test: add basic dmadev copy tests Bruce Richardson
@ 2021-09-17 13:30   ` Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 7/9] app/test: test dmadev instance failure handling Bruce Richardson
                     ` (2 subsequent siblings)
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:30 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add unit tests for various combinations of use for dmadev, copying
bursts of packets in various formats, e.g.

1. enqueuing two smaller bursts and completing them as one burst
2. enqueuing one burst and gathering completions in smaller bursts
3. using completed_status() function to gather completions rather than
   just completed()

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 101 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 100 insertions(+), 1 deletion(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 2f22b6e382..3a1373b1ef 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -83,6 +83,98 @@ await_hw(int dev_id, uint16_t vchan)
 	}
 }
 
+/* run a series of copy tests just using some different options for enqueues and completions */
+static int
+do_multi_copies(int dev_id, uint16_t vchan,
+		int split_batches,     /* submit 2 x 16 or 1 x 32 burst */
+		int split_completions, /* gather 2 x 16 or 1 x 32 completions */
+		int use_completed_status) /* use completed or completed_status function */
+{
+	struct rte_mbuf *srcs[32], *dsts[32];
+	enum rte_dma_status_code sc[32];
+	unsigned int i, j;
+	bool dma_err = false;
+
+	/* Enqueue burst of copies and hit doorbell */
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		uint64_t *src_data;
+
+		if (split_batches && i == RTE_DIM(srcs) / 2)
+			rte_dma_submit(dev_id, vchan);
+
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+		if (srcs[i] == NULL || dsts[i] == NULL)
+			ERR_RETURN("Error allocating buffers\n");
+
+		src_data = rte_pktmbuf_mtod(srcs[i], uint64_t *);
+		for (j = 0; j < COPY_LEN/sizeof(uint64_t); j++)
+			src_data[j] = rte_rand();
+
+		if (rte_dma_copy(dev_id, vchan, srcs[i]->buf_iova + srcs[i]->data_off,
+				dsts[i]->buf_iova + dsts[i]->data_off, COPY_LEN, 0) != id_count++)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", i);
+	}
+	rte_dma_submit(dev_id, vchan);
+
+	await_hw(dev_id, vchan);
+
+	if (split_completions) {
+		/* gather completions in two halves */
+		uint16_t half_len = RTE_DIM(srcs) / 2;
+		int ret = rte_dma_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err)
+			ERR_RETURN("Error with rte_dma_completed - first half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+
+		ret = rte_dma_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err)
+			ERR_RETURN("Error with rte_dma_completed - second half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+	} else {
+		/* gather all completions in one go, using either
+		 * completed or completed_status fns
+		 */
+		if (!use_completed_status) {
+			int n = rte_dma_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+			if (n != RTE_DIM(srcs) || dma_err)
+				ERR_RETURN("Error with rte_dma_completed, %u [expected: %zu], dma_err = %d\n",
+						n, RTE_DIM(srcs), dma_err);
+		} else {
+			int n = rte_dma_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc);
+			if (n != RTE_DIM(srcs))
+				ERR_RETURN("Error with rte_dma_completed_status, %u [expected: %zu]\n",
+						n, RTE_DIM(srcs));
+
+			for (j = 0; j < (uint16_t)n; j++)
+				if (sc[j] != RTE_DMA_STATUS_SUCCESSFUL)
+					ERR_RETURN("Error with rte_dma_completed_status, job %u reports failure [code %u]\n",
+							j, sc[j]);
+		}
+	}
+
+	/* check for empty */
+	int ret = use_completed_status ?
+			rte_dma_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc) :
+			rte_dma_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+	if (ret != 0)
+		ERR_RETURN("Error with completion check - ops unexpectedly returned\n");
+
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		char *src_data, *dst_data;
+
+		src_data = rte_pktmbuf_mtod(srcs[i], char *);
+		dst_data = rte_pktmbuf_mtod(dsts[i], char *);
+		for (j = 0; j < COPY_LEN; j++)
+			if (src_data[j] != dst_data[j])
+				ERR_RETURN("Error with copy of packet %u, byte %u\n", i, j);
+
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
 static int
 test_enqueue_copies(int dev_id, uint16_t vchan)
 {
@@ -177,7 +269,14 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 		rte_pktmbuf_free(dst);
 	} while (0);
 
-	return 0;
+	/* test doing multiple copies */
+	return do_multi_copies(dev_id, vchan, 0, 0, 0) /* enqueue and complete 1 batch at a time */
+			/* enqueue 2 batches and then complete both */
+			|| do_multi_copies(dev_id, vchan, 1, 0, 0)
+			/* enqueue 1 batch, then complete in two halves */
+			|| do_multi_copies(dev_id, vchan, 0, 1, 0)
+			/* test using completed_status in place of regular completed API */
+			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
 static int
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v4 7/9] app/test: test dmadev instance failure handling
  2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (5 preceding siblings ...)
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 6/9] app/test: add more comprehensive " Bruce Richardson
@ 2021-09-17 13:30   ` Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 8/9] app/test: add dmadev fill tests Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 9/9] app/test: add dmadev burst capacity API test Bruce Richardson
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:30 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a series of tests which inject bad copy operations into a dmadev to
exercise its error handling and reporting capabilities. Errors at various
positions within a burst are tested, as are errors in bursts with the
fence flag set, and multiple errors in a single burst.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 357 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 357 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 3a1373b1ef..656012239d 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -279,6 +279,354 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
+/* Failure handling test cases - global macros and variables for those tests*/
+#define COMP_BURST_SZ	16
+#define OPT_FENCE(idx) ((fence && idx == 8) ? RTE_DMA_OP_FLAG_FENCE : 0)
+
+static int
+test_failure_in_full_burst(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test single full batch statuses with failures */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	struct rte_dma_stats baseline, stats;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count;
+	unsigned int i;
+	bool error = false;
+	int err_count = 0;
+
+	rte_dma_stats_get(dev_id, vchan, &baseline); /* get a baseline set of stats */
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(i == fail_idx ? 0 : (srcs[i]->buf_iova + srcs[i]->data_off)),
+				dsts[i]->buf_iova + dsts[i]->data_off,
+				COPY_LEN, OPT_FENCE(i));
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", i);
+		if (i == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	rte_dma_stats_get(dev_id, vchan, &stats);
+	if (stats.submitted != baseline.submitted + COMP_BURST_SZ)
+		ERR_RETURN("Submitted stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.submitted, baseline.submitted + COMP_BURST_SZ);
+
+	await_hw(dev_id, vchan);
+
+	count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx)
+		ERR_RETURN("Error with rte_dma_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+	if (!error)
+		ERR_RETURN("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+
+	/* all checks ok, now verify calling completed() again always returns 0 */
+	for (i = 0; i < 10; i++)
+		if (rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error) != 0
+				|| error == false || idx != (invalid_addr_id - 1))
+			ERR_RETURN("Error with follow-up completed calls for fail idx %u\n",
+					fail_idx);
+
+	status_count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ,
+			&idx, status);
+	/* some HW may stop on error and be restarted after getting error status for single value
+	 * To handle this case, if we get just one error back, wait for more completions and get
+	 * status for rest of the burst
+	 */
+	if (status_count == 1) {
+		await_hw(dev_id, vchan);
+		status_count += rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ - 1,
+					&idx, &status[1]);
+	}
+	/* check that at this point we have all status values */
+	if (status_count != COMP_BURST_SZ - count)
+		ERR_RETURN("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+	/* now verify just one failure followed by multiple successful or skipped entries */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+	for (i = 1; i < status_count; i++)
+		/* after a failure in a burst, depending on ordering/fencing,
+		 * operations may be successful or skipped because of previous error.
+		 */
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[i] != RTE_DMA_STATUS_NOT_ATTEMPTED)
+			ERR_RETURN("Error with status calls for fail idx %u. Status for job %u (of %u) is not successful\n",
+					fail_idx, count + i, COMP_BURST_SZ);
+
+	/* check the completed + errors stats are as expected */
+	rte_dma_stats_get(dev_id, vchan, &stats);
+	if (stats.completed != baseline.completed + COMP_BURST_SZ)
+		ERR_RETURN("Completed stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.completed, baseline.completed + COMP_BURST_SZ);
+	for (i = 0; i < status_count; i++)
+		err_count += (status[i] != RTE_DMA_STATUS_SUCCESSFUL);
+	if (stats.errors != baseline.errors + err_count)
+		ERR_RETURN("'Errors' stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.errors, baseline.errors + err_count);
+
+	return 0;
+}
+
+static int
+test_individual_status_query_with_failure(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test gathering batch statuses one at a time */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count = 0, status_count = 0;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, OPT_FENCE(j));
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* use regular "completed" until we hit error */
+	while (!error) {
+		uint16_t n = rte_dma_completed(dev_id, vchan, 1, &idx, &error);
+		count += n;
+		if (n > 1 || count >= COMP_BURST_SZ)
+			ERR_RETURN("Error - too many completions got\n");
+		if (n == 0 && !error)
+			ERR_RETURN("Error, unexpectedly got zero completions after %u completed\n",
+					count);
+	}
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, last successful index not as expected, got %u, expected %u\n",
+				idx, invalid_addr_id - 1);
+
+	/* use completed_status until we hit end of burst */
+	while (count + status_count < COMP_BURST_SZ) {
+		uint16_t n = rte_dma_completed_status(dev_id, vchan, 1, &idx,
+				&status[status_count]);
+		await_hw(dev_id, vchan); /* allow delay to ensure jobs are completed */
+		status_count += n;
+		if (n != 1)
+			ERR_RETURN("Error: unexpected number of completions received, %u, not 1\n",
+					n);
+	}
+
+	/* check for single failure */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error, unexpected successful DMA transaction\n");
+	for (j = 1; j < status_count; j++)
+		if (status[j] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[j] != RTE_DMA_STATUS_NOT_ATTEMPTED)
+			ERR_RETURN("Error, unexpected DMA error reported\n");
+
+	return 0;
+}
+
+static int
+test_single_item_status_query_with_failure(int dev_id, uint16_t vchan,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* When error occurs just collect a single error using "completed_status()"
+	 * before going to back to completed() calls
+	 */
+	enum rte_dma_status_code status;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count, count2;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* get up to the error point */
+	count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx)
+		ERR_RETURN("Error with rte_dma_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+	if (!error)
+		ERR_RETURN("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+
+	/* get the error code */
+	status_count = rte_dma_completed_status(dev_id, vchan, 1, &idx, &status);
+	if (status_count != 1)
+		ERR_RETURN("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+	if (status == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+
+	/* delay in case time needed after err handled to complete other jobs */
+	await_hw(dev_id, vchan);
+
+	/* get the rest of the completions without status */
+	count2 = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (error == true)
+		ERR_RETURN("Error, got further errors post completed_status() call, for failure case %u.\n",
+				fail_idx);
+	if (count + status_count + count2 != COMP_BURST_SZ)
+		ERR_RETURN("Error, incorrect number of completions received, got %u not %u\n",
+				count + status_count + count2, COMP_BURST_SZ);
+
+	return 0;
+}
+
+static int
+test_multi_failure(int dev_id, uint16_t vchan, struct rte_mbuf **srcs, struct rte_mbuf **dsts,
+		const unsigned int *fail, size_t num_fail)
+{
+	/* test having multiple errors in one go */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	unsigned int i, j;
+	uint16_t count, err_count = 0;
+	bool error = false;
+
+	/* enqueue and gather completions in one go */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere is the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dma_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ, NULL, status);
+	while (count < COMP_BURST_SZ) {
+		await_hw(dev_id, vchan);
+
+		uint16_t ret = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ - count,
+				NULL, &status[count]);
+		if (ret == 0)
+			ERR_RETURN("Error getting all completions for jobs. Got %u of %u\n",
+					count, COMP_BURST_SZ);
+		count += ret;
+	}
+	for (i = 0; i < count; i++)
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
+			err_count++;
+
+	if (err_count != num_fail)
+		ERR_RETURN("Error: Invalid number of failed completions returned, %u; expected %zu\n",
+			err_count, num_fail);
+
+	/* enqueue and gather completions in bursts, but getting errors one at a time */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere is the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dma_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = 0;
+	err_count = 0;
+	while (count + err_count < COMP_BURST_SZ) {
+		count += rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, NULL, &error);
+		if (error) {
+			uint16_t ret = rte_dma_completed_status(dev_id, vchan, 1,
+					NULL, status);
+			if (ret != 1)
+				ERR_RETURN("Error getting error-status for completions\n");
+			err_count += ret;
+			await_hw(dev_id, vchan);
+		}
+	}
+	if (err_count != num_fail)
+		ERR_RETURN("Error: Incorrect number of failed completions received, got %u not %zu\n",
+				err_count, num_fail);
+
+	return 0;
+}
+
+static int
+test_completion_status(int dev_id, uint16_t vchan, bool fence)
+{
+	const unsigned int fail[] = {0, 7, 14, 15};
+	struct rte_mbuf *srcs[COMP_BURST_SZ], *dsts[COMP_BURST_SZ];
+	unsigned int i;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+	}
+
+	for (i = 0; i < RTE_DIM(fail); i++) {
+		if (test_failure_in_full_burst(dev_id, vchan, fence, srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		if (test_individual_status_query_with_failure(dev_id, vchan, fence,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		/* test is run the same fenced, or unfenced, but no harm in running it twice */
+		if (test_single_item_status_query_with_failure(dev_id, vchan,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+	}
+
+	if (test_multi_failure(dev_id, vchan, srcs, dsts, fail, RTE_DIM(fail)) < 0)
+		return -1;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
+static int
+test_completion_handling(int dev_id, uint16_t vchan)
+{
+	return test_completion_status(dev_id, vchan, false)              /* without fences */
+			|| test_completion_status(dev_id, vchan, true);  /* with fences */
+
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -335,6 +683,15 @@ test_dmadev_instance(uint16_t dev_id)
 	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
 		goto err;
 
+	/* to test error handling we can provide null pointers for source or dest in copies. This
+	 * requires VA mode in DPDK, since NULL(0) is a valid physical address.
+	 */
+	if (rte_eal_iova_mode() != RTE_IOVA_VA)
+		printf("DMA Dev %u: DPDK not in VA mode, skipping error handling tests\n", dev_id);
+	else if (runtest("error handling", test_completion_handling, 1,
+			dev_id, vchan, !CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v4 8/9] app/test: add dmadev fill tests
  2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (6 preceding siblings ...)
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 7/9] app/test: test dmadev instance failure handling Bruce Richardson
@ 2021-09-17 13:30   ` Bruce Richardson
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 9/9] app/test: add dmadev burst capacity API test Bruce Richardson
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:30 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

For dma devices which support the fill operation, run unit tests to
verify fill behaviour is correct.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 49 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 656012239d..f763cf273c 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -624,7 +624,51 @@ test_completion_handling(int dev_id, uint16_t vchan)
 {
 	return test_completion_status(dev_id, vchan, false)              /* without fences */
 			|| test_completion_status(dev_id, vchan, true);  /* with fences */
+}
+
+static int
+test_enqueue_fill(int dev_id, uint16_t vchan)
+{
+	const unsigned int lengths[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst;
+	char *dst_data;
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	dst = rte_pktmbuf_alloc(pool);
+	if (dst == NULL)
+		ERR_RETURN("Failed to allocate mbuf\n");
+	dst_data = rte_pktmbuf_mtod(dst, char *);
+
+	for (i = 0; i < RTE_DIM(lengths); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, rte_pktmbuf_data_len(dst));
+
+		/* perform the fill operation */
+		int id = rte_dma_fill(dev_id, vchan, pattern,
+				rte_pktmbuf_iova(dst), lengths[i], RTE_DMA_OP_FLAG_SUBMIT);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_fill\n");
+		await_hw(dev_id, vchan);
+
+		if (rte_dma_completed(dev_id, vchan, 1, NULL, NULL) != 1)
+			ERR_RETURN("Error: fill operation failed (length: %u)\n", lengths[i]);
+		/* check the data from the fill operation is correct */
+		for (j = 0; j < lengths[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte)
+				ERR_RETURN("Error with fill operation (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], pat_byte);
+		}
+		/* check that the data after the fill operation was not written to */
+		for (; j < rte_pktmbuf_data_len(dst); j++)
+			if (dst_data[j] != 0)
+				ERR_RETURN("Error, fill operation wrote too far (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], 0);
+	}
 
+	rte_pktmbuf_free(dst);
+	return 0;
 }
 
 static int
@@ -692,6 +736,11 @@ test_dmadev_instance(uint16_t dev_id)
 			dev_id, vchan, !CHECK_ERRS) < 0)
 		goto err;
 
+	if ((info.dev_capa & RTE_DMA_CAPA_OPS_FILL) == 0)
+		printf("DMA Dev %u: No device fill support, skipping fill tests\n", dev_id);
+	else if (runtest("fill", test_enqueue_fill, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v4 9/9] app/test: add dmadev burst capacity API test
  2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (7 preceding siblings ...)
  2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 8/9] app/test: add dmadev fill tests Bruce Richardson
@ 2021-09-17 13:30   ` Bruce Richardson
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:30 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Add a test case to validate the functionality of drivers' burst capacity
API implementations.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 68 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index f763cf273c..98fcab67f3 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -671,6 +671,69 @@ test_enqueue_fill(int dev_id, uint16_t vchan)
 	return 0;
 }
 
+static int
+test_burst_capacity(int dev_id, uint16_t vchan)
+{
+#define CAP_TEST_BURST_SIZE	64
+	const int ring_space = rte_dma_burst_capacity(dev_id, vchan);
+	struct rte_mbuf *src, *dst;
+	int i, j, iter;
+	int cap, ret;
+	bool dma_err;
+
+	src = rte_pktmbuf_alloc(pool);
+	dst = rte_pktmbuf_alloc(pool);
+
+	/* to test capacity, we enqueue elements and check capacity is reduced
+	 * by one each time - rebaselining the expected value after each burst
+	 * as the capacity is only for a burst. We enqueue multiple bursts to
+	 * fill up half the ring, before emptying it again. We do this twice to
+	 * ensure that we get to test scenarios where we get ring wrap-around
+	 */
+	for (iter = 0; iter < 2; iter++) {
+		for (i = 0; i < (ring_space / (2 * CAP_TEST_BURST_SIZE)) + 1; i++) {
+			cap = rte_dma_burst_capacity(dev_id, vchan);
+
+			for (j = 0; j < CAP_TEST_BURST_SIZE; j++) {
+				ret = rte_dma_copy(dev_id, vchan, rte_pktmbuf_iova(src),
+						rte_pktmbuf_iova(dst), COPY_LEN, 0);
+				if (ret < 0)
+					ERR_RETURN("Error with rte_dmadev_copy\n");
+
+				if (rte_dma_burst_capacity(dev_id, vchan) != cap - (j + 1))
+					ERR_RETURN("Error, ring capacity did not change as expected\n");
+			}
+			if (rte_dma_submit(dev_id, vchan) < 0)
+				ERR_RETURN("Error, failed to submit burst\n");
+
+			if (cap < rte_dma_burst_capacity(dev_id, vchan))
+				ERR_RETURN("Error, avail ring capacity has gone up, not down\n");
+		}
+		await_hw(dev_id, vchan);
+
+		for (i = 0; i < (ring_space / (2 * CAP_TEST_BURST_SIZE)) + 1; i++) {
+			ret = rte_dma_completed(dev_id, vchan,
+					CAP_TEST_BURST_SIZE, NULL, &dma_err);
+			if (ret != CAP_TEST_BURST_SIZE || dma_err) {
+				enum rte_dma_status_code status;
+
+				rte_dma_completed_status(dev_id, vchan, 1, NULL, &status);
+				ERR_RETURN("Error with rte_dmadev_completed, %u [expected: %u], dma_err = %d, i = %u, iter = %u, status = %u\n",
+						ret, CAP_TEST_BURST_SIZE, dma_err, i, iter, status);
+			}
+		}
+		cap = rte_dma_burst_capacity(dev_id, vchan);
+		if (cap != ring_space)
+			ERR_RETURN("Error, ring capacity has not reset to original value, got %u, expected %u\n",
+					cap, ring_space);
+	}
+
+	rte_pktmbuf_free(src);
+	rte_pktmbuf_free(dst);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -741,6 +804,11 @@ test_dmadev_instance(uint16_t dev_id)
 	else if (runtest("fill", test_enqueue_fill, 1, dev_id, vchan, CHECK_ERRS) < 0)
 		goto err;
 
+	if (rte_dma_burst_capacity(dev_id, vchan) == -ENOTSUP)
+		printf("DMA Dev %u: Burst capacity API not supported, skipping tests\n", dev_id);
+	else if (runtest("burst capacity", test_burst_capacity, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
                   ` (9 preceding siblings ...)
  2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
@ 2021-09-17 13:54 ` Bruce Richardson
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 1/9] dmadev: add channel status check for testing use Bruce Richardson
                     ` (8 more replies)
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
  11 siblings, 9 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:54 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

This patchset adds a fairly comprehensive set of tests for basic dmadev
functionality. Tests are added to verify the basic copy operation on each
device, using both the submit function and the submit flag, and to verify
completion gathering using both the "completed()" and "completed_status()"
functions. Beyond that, tests are added for error reporting and handling,
as is a suite of tests for the fill() operation on devices that support it.
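
For readers new to the dmadev data path, the core pattern these tests exercise
is roughly the following sketch (illustrative only, error handling trimmed;
the IOVA parameters and the fixed delay, which the tests replace with a
vchan-status poll, are assumptions):

#include <stdbool.h>
#include <rte_dmadev.h>
#include <rte_cycles.h>

/* Sketch only: enqueue a single copy with the submit flag, wait, then
 * gather the completion and check for errors.
 */
static int
copy_and_wait(int16_t dev_id, uint16_t vchan,
		rte_iova_t src_iova, rte_iova_t dst_iova, uint32_t len)
{
	uint16_t last_id;
	bool has_error = false;

	if (rte_dma_copy(dev_id, vchan, src_iova, dst_iova, len,
			RTE_DMA_OP_FLAG_SUBMIT) < 0)
		return -1;

	rte_delay_us_sleep(100);	/* the tests poll the vchan status instead */

	if (rte_dma_completed(dev_id, vchan, 1, &last_id, &has_error) != 1 ||
			has_error)
		return -1;
	return 0;
}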

Depends-on: series-18960 ("support dmadev")

V5:
* added missing reviewed-by tags from v3 reviewed.

V4:
* rebased to v22 of dmadev set
* added patch for iteration macro for dmadevs to allow testing each dmadev in
  turn

V3:
* add patch and tests for a burst-capacity function
* addressed review feedback from v2
* code cleanups to try and shorten code where possible

V2:
* added to dmadev an API to check for a device being idle
* removed the hard-coded timeout delays before checking completions, and instead
  wait for the device to be idle
* added in checks for statistics updates as part of some tests
* fixed issue identified by internal coverity scan
* other minor miscellaneous changes and fixes.

Bruce Richardson (6):
  dmadev: add channel status check for testing use
  dmadev: add device iterator
  app/test: add basic dmadev instance tests
  app/test: add basic dmadev copy tests
  app/test: add more comprehensive dmadev copy tests
  app/test: test dmadev instance failure handling

Kevin Laatz (3):
  dmadev: add burst capacity API
  app/test: add dmadev fill tests
  app/test: add dmadev burst capacity API test

 app/test/test_dmadev.c       | 820 ++++++++++++++++++++++++++++++++++-
 lib/dmadev/rte_dmadev.c      |  39 ++
 lib/dmadev/rte_dmadev.h      |  70 +++
 lib/dmadev/rte_dmadev_core.h |   9 +
 lib/dmadev/version.map       |   3 +
 5 files changed, 940 insertions(+), 1 deletion(-)

--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v5 1/9] dmadev: add channel status check for testing use
  2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
@ 2021-09-17 13:54   ` Bruce Richardson
  2021-09-22  8:25     ` fengchengwen
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 2/9] dmadev: add burst capacity API Bruce Richardson
                     ` (7 subsequent siblings)
  8 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:54 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a function to check whether a device or vchan has completed all jobs
assigned to it, without gathering the results. This is primarily for use
in testing, to allow the hardware to be in a known state prior to
gathering completions.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 lib/dmadev/rte_dmadev.c      | 15 +++++++++++++++
 lib/dmadev/rte_dmadev.h      | 33 +++++++++++++++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h |  6 ++++++
 lib/dmadev/version.map       |  1 +
 4 files changed, 55 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 544937acf8..859958fff8 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -716,3 +716,18 @@ rte_dma_dump(int16_t dev_id, FILE *f)

 	return 0;
 }
+
+int
+rte_dma_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status)
+{
+	struct rte_dma_dev *dev = &rte_dma_devices[dev_id];
+
+	RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
+	if (vchan >= dev->data->dev_conf.nb_vchans) {
+		RTE_DMA_LOG(ERR, "Device %u vchan %u out of range\n", dev_id, vchan);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vchan_status, -ENOTSUP);
+	return (*dev->dev_ops->vchan_status)(dev, vchan, status);
+}
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index be54f2cb9d..86c4a38f83 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -660,6 +660,39 @@ int rte_dma_stats_get(int16_t dev_id, uint16_t vchan,
 __rte_experimental
 int rte_dma_stats_reset(int16_t dev_id, uint16_t vchan);

+/**
+ * device vchannel status
+ *
+ * Enum with the options for the channel status, either idle, active or halted due to error
+ */
+enum rte_dma_vchan_status {
+	RTE_DMA_VCHAN_IDLE,          /**< not processing, awaiting ops */
+	RTE_DMA_VCHAN_ACTIVE,        /**< currently processing jobs */
+	RTE_DMA_VCHAN_HALTED_ERROR,  /**< not processing due to error, cannot accept new ops */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Determine if all jobs have completed on a device channel.
+ * This function is primarily designed for testing use, as it allows a process to check if
+ * all jobs are completed, without actually gathering completions from those jobs.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ * @param[out] status
+ *   The vchan status
+ * @return
+ *   0 - call completed successfully
+ *   < 0 - error code indicating there was a problem calling the API
+ */
+__rte_experimental
+int
+rte_dma_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index edb3286cbb..0eec1aa43b 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -46,6 +46,10 @@ typedef int (*rte_dma_vchan_setup_t)(struct rte_dma_dev *dev, uint16_t vchan,
 				const struct rte_dma_vchan_conf *conf,
 				uint32_t conf_sz);

+/** @internal Used to check if a virtual channel has finished all jobs. */
+typedef int (*rte_dma_vchan_status_t)(const struct rte_dma_dev *dev, uint16_t vchan,
+		enum rte_dma_vchan_status *status);
+
 /** @internal Used to retrieve basic statistics. */
 typedef int (*rte_dma_stats_get_t)(const struct rte_dma_dev *dev,
 			uint16_t vchan, struct rte_dma_stats *stats,
@@ -119,6 +123,8 @@ struct rte_dma_dev_ops {
 	rte_dma_stats_reset_t    stats_reset;

 	rte_dma_dump_t           dev_dump;
+	rte_dma_vchan_status_t   vchan_status;
+
 };

 /**
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index c780463bb2..40ea517016 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -20,6 +20,7 @@ EXPERIMENTAL {
 	rte_dma_stop;
 	rte_dma_submit;
 	rte_dma_vchan_setup;
+	rte_dma_vchan_status;

 	local: *;
 };
--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v5 2/9] dmadev: add burst capacity API
  2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 1/9] dmadev: add channel status check for testing use Bruce Richardson
@ 2021-09-17 13:54   ` Bruce Richardson
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 3/9] dmadev: add device iterator Bruce Richardson
                     ` (6 subsequent siblings)
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:54 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj

From: Kevin Laatz <kevin.laatz@intel.com>

Add a burst capacity check API to the dmadev library. This API is useful to
applications which need to how many descriptors can be enqueued in the
current batch. For example, it could be used to determine whether all
segments of a multi-segment packet can be enqueued in the same batch or not
(to avoid half-offload of the packet).
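
As a rough illustration (not part of this patch; the helper below is
hypothetical and only a sketch), an application could gate a multi-segment
enqueue on the reported capacity along these lines:

#include <errno.h>
#include <rte_dmadev.h>
#include <rte_mbuf.h>

/* Hypothetical helper: enqueue all segments of a packet only when the
 * current burst has room for every one of them, to avoid half-offload.
 * A negative return from rte_dma_burst_capacity() (e.g. -ENOTSUP) is
 * simply treated as "no room" here.
 */
static inline int
enqueue_pkt_if_room(int16_t dev_id, uint16_t vchan,
        const struct rte_mbuf *pkt, rte_iova_t dst_iova)
{
    const struct rte_mbuf *seg;

    if (rte_dma_burst_capacity(dev_id, vchan) < pkt->nb_segs)
        return -ENOSPC; /* nothing enqueued, retry later */

    for (seg = pkt; seg != NULL; seg = seg->next) {
        if (rte_dma_copy(dev_id, vchan, rte_pktmbuf_iova(seg),
                dst_iova, rte_pktmbuf_data_len(seg), 0) < 0)
            return -EIO;
        dst_iova += rte_pktmbuf_data_len(seg);
    }
    return rte_dma_submit(dev_id, vchan);
}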

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 lib/dmadev/rte_dmadev.c      | 11 +++++++++++
 lib/dmadev/rte_dmadev.h      | 19 +++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h |  5 ++++-
 lib/dmadev/version.map       |  1 +
 4 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 859958fff8..ed342e0d32 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -684,6 +684,17 @@ dma_dump_capability(FILE *f, uint64_t dev_capa)
 	fprintf(f, "\n");
 }

+int
+rte_dma_burst_capacity(uint16_t dev_id, uint16_t vchan)
+{
+	const struct rte_dma_dev *dev = &rte_dma_devices[dev_id];
+
+	RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->burst_capacity, -ENOTSUP);
+	return (*dev->dev_ops->burst_capacity)(dev, vchan);
+}
+
 int
 rte_dma_dump(int16_t dev_id, FILE *f)
 {
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index 86c4a38f83..be4bb18ee6 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -693,6 +693,25 @@ __rte_experimental
 int
 rte_dma_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status);

+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Check remaining capacity in descriptor ring for the current burst.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ *
+ * @return
+ *   - Remaining space in the descriptor ring for the current burst on success.
+ *   - -ENOTSUP: if not supported by the device.
+ */
+__rte_experimental
+int
+rte_dma_burst_capacity(uint16_t dev_id, uint16_t vchan);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index 0eec1aa43b..21290ef471 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -58,6 +58,9 @@ typedef int (*rte_dma_stats_get_t)(const struct rte_dma_dev *dev,
 /** @internal Used to reset basic statistics. */
 typedef int (*rte_dma_stats_reset_t)(struct rte_dma_dev *dev, uint16_t vchan);

+/** @internal Used to check the remaining space in descriptor ring. */
+typedef uint16_t (*rte_dma_burst_capacity_t)(const struct rte_dma_dev *dev, uint16_t vchan);
+
 /** @internal Used to dump internal information. */
 typedef int (*rte_dma_dump_t)(const struct rte_dma_dev *dev, FILE *f);

@@ -124,7 +127,7 @@ struct rte_dma_dev_ops {

 	rte_dma_dump_t           dev_dump;
 	rte_dma_vchan_status_t   vchan_status;
-
+	rte_dma_burst_capacity_t burst_capacity;
 };

 /**
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 40ea517016..66420c4ede 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -1,6 +1,7 @@
 EXPERIMENTAL {
 	global:

+	rte_dma_burst_capacity;
 	rte_dma_close;
 	rte_dma_completed;
 	rte_dma_completed_status;
--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v5 3/9] dmadev: add device iterator
  2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 1/9] dmadev: add channel status check for testing use Bruce Richardson
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 2/9] dmadev: add burst capacity API Bruce Richardson
@ 2021-09-17 13:54   ` Bruce Richardson
  2021-09-22  8:46     ` fengchengwen
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 4/9] app/test: add basic dmadev instance tests Bruce Richardson
                     ` (5 subsequent siblings)
  8 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:54 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a function and wrapper macro to iterate over all dma devices.
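
For illustration only (not part of the patch), iterating over the probed
devices and printing a couple of fields from rte_dma_info might look like:

#include <stdio.h>
#include <rte_dmadev.h>

/* print basic info for every valid dmadev instance */
static void
dump_all_dmadevs(void)
{
    struct rte_dma_info info;
    int i;

    RTE_DMA_FOREACH_DEV(i) {
        if (rte_dma_info_get(i, &info) == 0)
            printf("dmadev %d: max_vchans %d, numa node %d\n",
                    i, info.max_vchans, info.numa_node);
    }
}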

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/dmadev/rte_dmadev.c | 13 +++++++++++++
 lib/dmadev/rte_dmadev.h | 18 ++++++++++++++++++
 lib/dmadev/version.map  |  1 +
 3 files changed, 32 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index ed342e0d32..ba189f3539 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -55,6 +55,19 @@ rte_dma_dev_max(size_t dev_max)
 	return 0;
 }

+uint16_t
+rte_dma_next_dev(uint16_t start_dev_id)
+{
+	uint16_t dev_id = start_dev_id;
+	while (dev_id < dma_devices_max && rte_dma_devices[dev_id].state == RTE_DMA_DEV_UNUSED)
+		dev_id++;
+
+	if (dev_id < dma_devices_max)
+		return dev_id;
+
+	return UINT16_MAX;
+}
+
 static int
 dma_check_name(const char *name)
 {
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index be4bb18ee6..d262b8ed8d 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -219,6 +219,24 @@ bool rte_dma_is_valid(int16_t dev_id);
 __rte_experimental
 uint16_t rte_dma_count_avail(void);

+/**
+ * Iterates over valid dmadev instances.
+ *
+ * @param start_dev_id
+ *   The id of the next possible dmadev.
+ * @return
+ *   Next valid dmadev, UINT16_MAX if there is none.
+ */
+__rte_experimental
+uint16_t rte_dma_next_dev(uint16_t start_dev_id);
+
+/** Utility macro to iterate over all available dmadevs */
+#define RTE_DMA_FOREACH_DEV(p) \
+	for (p = rte_dma_next_dev(0); \
+	     (uint16_t)p < UINT16_MAX; \
+	     p = rte_dma_next_dev(p + 1))
+
+
 /** DMA device support memory-to-memory transfer.
  *
  * @see struct rte_dma_info::dev_capa
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 66420c4ede..0ab570a1be 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -15,6 +15,7 @@ EXPERIMENTAL {
 	rte_dma_get_dev_id;
 	rte_dma_info_get;
 	rte_dma_is_valid;
+	rte_dma_next_dev;
 	rte_dma_start;
 	rte_dma_stats_get;
 	rte_dma_stats_reset;
--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v5 4/9] app/test: add basic dmadev instance tests
  2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (2 preceding siblings ...)
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 3/9] dmadev: add device iterator Bruce Richardson
@ 2021-09-17 13:54   ` Bruce Richardson
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 5/9] app/test: add basic dmadev copy tests Bruce Richardson
                     ` (4 subsequent siblings)
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:54 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Run basic sanity tests for configuring, starting and stopping a dmadev
instance to help validate drivers. This also provides the framework for
future tests for data-path operation.
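
For reference, the configure/start sequence the test exercises boils down to
the sketch below (error reporting and stats checks stripped; the ring size is
arbitrary):

#include <rte_dmadev.h>

/* minimal bring-up of one vchan in mem-to-mem mode */
static int
bring_up_dev(int16_t dev_id)
{
    const struct rte_dma_conf conf = { .nb_vchans = 1 };
    const struct rte_dma_vchan_conf qconf = {
        .direction = RTE_DMA_DIR_MEM_TO_MEM,
        .nb_desc = 512,
    };

    if (rte_dma_configure(dev_id, &conf) != 0)
        return -1;
    if (rte_dma_vchan_setup(dev_id, 0, &qconf) < 0)
        return -1;
    return rte_dma_start(dev_id);
}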

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 app/test/test_dmadev.c | 72 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 71 insertions(+), 1 deletion(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index e765ec5f2c..28fbdc0b1f 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -3,6 +3,8 @@
  * Copyright(c) 2021 Intel Corporation
  */

+#include <inttypes.h>
+
 #include <rte_dmadev.h>
 #include <rte_bus_vdev.h>

@@ -11,6 +13,65 @@
 /* from test_dmadev_api.c */
 extern int test_dma_api(uint16_t dev_id);

+#define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); return -1; } while (0)
+
+static void
+__rte_format_printf(3, 4)
+print_err(const char *func, int lineno, const char *format, ...)
+{
+	va_list ap;
+
+	fprintf(stderr, "In %s:%d - ", func, lineno);
+	va_start(ap, format);
+	vfprintf(stderr, format, ap);
+	va_end(ap);
+}
+
+static int
+test_dmadev_instance(uint16_t dev_id)
+{
+#define TEST_RINGSIZE 512
+	struct rte_dma_stats stats;
+	struct rte_dma_info info;
+	const struct rte_dma_conf conf = { .nb_vchans = 1};
+	const struct rte_dma_vchan_conf qconf = {
+			.direction = RTE_DMA_DIR_MEM_TO_MEM,
+			.nb_desc = TEST_RINGSIZE,
+	};
+	const int vchan = 0;
+
+	printf("\n### Test dmadev instance %u\n", dev_id);
+
+	rte_dma_info_get(dev_id, &info);
+	if (info.max_vchans < 1)
+		ERR_RETURN("Error, no channels available on device id %u\n", dev_id);
+
+	if (rte_dma_configure(dev_id, &conf) != 0)
+		ERR_RETURN("Error with rte_dma_configure()\n");
+
+	if (rte_dma_vchan_setup(dev_id, vchan, &qconf) < 0)
+		ERR_RETURN("Error with queue configuration\n");
+
+	rte_dma_info_get(dev_id, &info);
+	if (info.nb_vchans != 1)
+		ERR_RETURN("Error, no configured queues reported on device id %u\n", dev_id);
+
+	if (rte_dma_start(dev_id) != 0)
+		ERR_RETURN("Error with rte_dma_start()\n");
+
+	if (rte_dma_stats_get(dev_id, vchan, &stats) != 0)
+		ERR_RETURN("Error with rte_dma_stats_get()\n");
+
+	if (stats.completed != 0 || stats.submitted != 0 || stats.errors != 0)
+		ERR_RETURN("Error device stats are not all zero: completed = %"PRIu64", "
+				"submitted = %"PRIu64", errors = %"PRIu64"\n",
+				stats.completed, stats.submitted, stats.errors);
+
+	rte_dma_stop(dev_id);
+	rte_dma_stats_reset(dev_id, vchan);
+	return 0;
+}
+
 static int
 test_apis(void)
 {
@@ -33,9 +94,18 @@ test_apis(void)
 static int
 test_dma(void)
 {
+	int i;
+
 	/* basic sanity on dmadev infrastructure */
 	if (test_apis() < 0)
-		return -1;
+		ERR_RETURN("Error performing API tests\n");
+
+	if (rte_dma_count_avail() == 0)
+		return TEST_SKIPPED;
+
+	RTE_DMA_FOREACH_DEV(i)
+		if (test_dmadev_instance(i) < 0)
+			ERR_RETURN("Error, test failure for device %d\n", i);

 	return 0;
 }
--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v5 5/9] app/test: add basic dmadev copy tests
  2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (3 preceding siblings ...)
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 4/9] app/test: add basic dmadev instance tests Bruce Richardson
@ 2021-09-17 13:54   ` Bruce Richardson
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 6/9] app/test: add more comprehensive " Bruce Richardson
                     ` (3 subsequent siblings)
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:54 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

For each dmadev instance, perform some basic copy tests to validate that
functionality.
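
Condensed, the core enqueue/complete sequence each iteration relies on looks
roughly like this (a sketch only; the 1024-byte length is arbitrary and the
data verification step is omitted):

#include <rte_dmadev.h>
#include <rte_mbuf.h>

/* enqueue one copy, ring the doorbell via the SUBMIT flag, then poll
 * for its completion; a real caller would bound the polling loop
 */
static int
copy_once(int16_t dev_id, uint16_t vchan, struct rte_mempool *mp)
{
    struct rte_mbuf *src = rte_pktmbuf_alloc(mp);
    struct rte_mbuf *dst = rte_pktmbuf_alloc(mp);
    int ret = -1;

    if (src == NULL || dst == NULL)
        goto out;
    if (rte_dma_copy(dev_id, vchan, rte_pktmbuf_iova(src),
            rte_pktmbuf_iova(dst), 1024, RTE_DMA_OP_FLAG_SUBMIT) < 0)
        goto out;
    while (rte_dma_completed(dev_id, vchan, 1, NULL, NULL) == 0)
        ;
    ret = 0;
out:
    rte_pktmbuf_free(src);
    rte_pktmbuf_free(dst);
    return ret;
}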

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 175 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 175 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 28fbdc0b1f..2f22b6e382 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -6,6 +6,10 @@
 #include <inttypes.h>

 #include <rte_dmadev.h>
+#include <rte_mbuf.h>
+#include <rte_pause.h>
+#include <rte_cycles.h>
+#include <rte_random.h>
 #include <rte_bus_vdev.h>

 #include "test.h"
@@ -15,6 +19,11 @@ extern int test_dma_api(uint16_t dev_id);

 #define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); return -1; } while (0)

+#define COPY_LEN 1024
+
+static struct rte_mempool *pool;
+static uint16_t id_count;
+
 static void
 __rte_format_printf(3, 4)
 print_err(const char *func, int lineno, const char *format, ...)
@@ -27,10 +36,155 @@ print_err(const char *func, int lineno, const char *format, ...)
 	va_end(ap);
 }

+static int
+runtest(const char *printable, int (*test_fn)(int dev_id, uint16_t vchan), int iterations,
+		int dev_id, uint16_t vchan, bool check_err_stats)
+{
+	struct rte_dma_stats stats;
+	int i;
+
+	rte_dma_stats_reset(dev_id, vchan);
+	printf("DMA Dev %d: Running %s Tests %s\n", dev_id, printable,
+			check_err_stats ? " " : "(errors expected)");
+	for (i = 0; i < iterations; i++) {
+		if (test_fn(dev_id, vchan) < 0)
+			return -1;
+
+		rte_dma_stats_get(dev_id, 0, &stats);
+		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
+		printf("Ops completed: %"PRIu64"\t", stats.completed);
+		printf("Errors: %"PRIu64"\r", stats.errors);
+
+		if (stats.completed != stats.submitted)
+			ERR_RETURN("\nError, not all submitted jobs are reported as completed\n");
+		if (check_err_stats && stats.errors != 0)
+			ERR_RETURN("\nErrors reported during op processing, aborting tests\n");
+	}
+	printf("\n");
+	return 0;
+}
+
+static void
+await_hw(int dev_id, uint16_t vchan)
+{
+	enum rte_dma_vchan_status st;
+
+	if (rte_dma_vchan_status(dev_id, vchan, &st) < 0) {
+		/* for drivers that don't support this op, just sleep for 1 millisecond */
+		rte_delay_us_sleep(1000);
+		return;
+	}
+
+	/* for those that do, *max* end time is one second from now, but all should be faster */
+	const uint64_t end_cycles = rte_get_timer_cycles() + rte_get_timer_hz();
+	while (st == RTE_DMA_VCHAN_ACTIVE && rte_get_timer_cycles() < end_cycles) {
+		rte_pause();
+		rte_dma_vchan_status(dev_id, vchan, &st);
+	}
+}
+
+static int
+test_enqueue_copies(int dev_id, uint16_t vchan)
+{
+	unsigned int i;
+	uint16_t id;
+
+	/* test doing a single copy */
+	do {
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		id = rte_dma_copy(dev_id, vchan, rte_pktmbuf_iova(src), rte_pktmbuf_iova(dst),
+				COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT);
+		if (id != id_count)
+			ERR_RETURN("Error with rte_dma_copy, got %u, expected %u\n",
+					id, id_count);
+
+		/* give time for copy to finish, then check it was done */
+		await_hw(dev_id, vchan);
+
+		for (i = 0; i < COPY_LEN; i++)
+			if (dst_data[i] != src_data[i])
+				ERR_RETURN("Data mismatch at char %u [Got %02x not %02x]\n", i,
+						dst_data[i], src_data[i]);
+
+		/* now check completion works */
+		if (rte_dma_completed(dev_id, vchan, 1, &id, NULL) != 1)
+			ERR_RETURN("Error with rte_dma_completed\n");
+
+		if (id != id_count)
+			ERR_RETURN("Error:incorrect job id received, %u [expected %u]\n",
+					id, id_count);
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+
+		/* now check completion returns nothing more */
+		if (rte_dma_completed(dev_id, 0, 1, NULL, NULL) != 0)
+			ERR_RETURN("Error with rte_dma_completed in empty check\n");
+
+		id_count++;
+
+	} while (0);
+
+	/* test doing multiple single copies */
+	do {
+		const uint16_t max_ops = 4;
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+		uint16_t count;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		/* perform the same copy <max_ops> times */
+		for (i = 0; i < max_ops; i++)
+			if (rte_dma_copy(dev_id, vchan,
+					rte_pktmbuf_iova(src),
+					rte_pktmbuf_iova(dst),
+					COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT) != id_count++)
+				ERR_RETURN("Error with rte_dma_copy\n");
+
+		await_hw(dev_id, vchan);
+
+		count = rte_dma_completed(dev_id, vchan, max_ops * 2, &id, NULL);
+		if (count != max_ops)
+			ERR_RETURN("Error with rte_dma_completed, got %u not %u\n",
+					count, max_ops);
+
+		if (id != id_count - 1)
+			ERR_RETURN("Error, incorrect job id returned: got %u not %u\n",
+					id, id_count - 1);
+
+		for (i = 0; i < COPY_LEN; i++)
+			if (dst_data[i] != src_data[i])
+				ERR_RETURN("Data mismatch at char %u\n", i);
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+	} while (0);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
 #define TEST_RINGSIZE 512
+#define CHECK_ERRS    true
 	struct rte_dma_stats stats;
 	struct rte_dma_info info;
 	const struct rte_dma_conf conf = { .nb_vchans = 1};
@@ -66,10 +220,31 @@ test_dmadev_instance(uint16_t dev_id)
 		ERR_RETURN("Error device stats are not all zero: completed = %"PRIu64", "
 				"submitted = %"PRIu64", errors = %"PRIu64"\n",
 				stats.completed, stats.submitted, stats.errors);
+	id_count = 0;
+
+	/* create a mempool for running tests */
+	pool = rte_pktmbuf_pool_create("TEST_DMADEV_POOL",
+			TEST_RINGSIZE * 2, /* n == num elements */
+			32,  /* cache size */
+			0,   /* priv size */
+			2048, /* data room size */
+			info.numa_node);
+	if (pool == NULL)
+		ERR_RETURN("Error with mempool creation\n");

+	/* run the test cases, use many iterations to ensure UINT16_MAX id wraparound */
+	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
+	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
 	return 0;
+
+err:
+	rte_mempool_free(pool);
+	rte_dma_stop(dev_id);
+	return -1;
 }

 static int
--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v5 6/9] app/test: add more comprehensive dmadev copy tests
  2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (4 preceding siblings ...)
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 5/9] app/test: add basic dmadev copy tests Bruce Richardson
@ 2021-09-17 13:54   ` Bruce Richardson
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 7/9] app/test: test dmadev instance failure handling Bruce Richardson
                     ` (2 subsequent siblings)
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:54 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add unit tests for various combinations of use for dmadev, copying
bursts of packets in various formats, e.g.

1. enqueuing two smaller bursts and completing them as one burst
2. enqueuing one burst and gathering completions in smaller bursts
3. using completed_status() function to gather completions rather than
   just completed()

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 101 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 100 insertions(+), 1 deletion(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 2f22b6e382..3a1373b1ef 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -83,6 +83,98 @@ await_hw(int dev_id, uint16_t vchan)
 	}
 }

+/* run a series of copy tests just using some different options for enqueues and completions */
+static int
+do_multi_copies(int dev_id, uint16_t vchan,
+		int split_batches,     /* submit 2 x 16 or 1 x 32 burst */
+		int split_completions, /* gather 2 x 16 or 1 x 32 completions */
+		int use_completed_status) /* use completed or completed_status function */
+{
+	struct rte_mbuf *srcs[32], *dsts[32];
+	enum rte_dma_status_code sc[32];
+	unsigned int i, j;
+	bool dma_err = false;
+
+	/* Enqueue burst of copies and hit doorbell */
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		uint64_t *src_data;
+
+		if (split_batches && i == RTE_DIM(srcs) / 2)
+			rte_dma_submit(dev_id, vchan);
+
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+		if (srcs[i] == NULL || dsts[i] == NULL)
+			ERR_RETURN("Error allocating buffers\n");
+
+		src_data = rte_pktmbuf_mtod(srcs[i], uint64_t *);
+		for (j = 0; j < COPY_LEN/sizeof(uint64_t); j++)
+			src_data[j] = rte_rand();
+
+		if (rte_dma_copy(dev_id, vchan, srcs[i]->buf_iova + srcs[i]->data_off,
+				dsts[i]->buf_iova + dsts[i]->data_off, COPY_LEN, 0) != id_count++)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", i);
+	}
+	rte_dma_submit(dev_id, vchan);
+
+	await_hw(dev_id, vchan);
+
+	if (split_completions) {
+		/* gather completions in two halves */
+		uint16_t half_len = RTE_DIM(srcs) / 2;
+		int ret = rte_dma_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err)
+			ERR_RETURN("Error with rte_dma_completed - first half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+
+		ret = rte_dma_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err)
+			ERR_RETURN("Error with rte_dma_completed - second half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+	} else {
+		/* gather all completions in one go, using either
+		 * completed or completed_status fns
+		 */
+		if (!use_completed_status) {
+			int n = rte_dma_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+			if (n != RTE_DIM(srcs) || dma_err)
+				ERR_RETURN("Error with rte_dma_completed, %u [expected: %zu], dma_err = %d\n",
+						n, RTE_DIM(srcs), dma_err);
+		} else {
+			int n = rte_dma_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc);
+			if (n != RTE_DIM(srcs))
+				ERR_RETURN("Error with rte_dma_completed_status, %u [expected: %zu]\n",
+						n, RTE_DIM(srcs));
+
+			for (j = 0; j < (uint16_t)n; j++)
+				if (sc[j] != RTE_DMA_STATUS_SUCCESSFUL)
+					ERR_RETURN("Error with rte_dma_completed_status, job %u reports failure [code %u]\n",
+							j, sc[j]);
+		}
+	}
+
+	/* check for empty */
+	int ret = use_completed_status ?
+			rte_dma_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc) :
+			rte_dma_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+	if (ret != 0)
+		ERR_RETURN("Error with completion check - ops unexpectedly returned\n");
+
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		char *src_data, *dst_data;
+
+		src_data = rte_pktmbuf_mtod(srcs[i], char *);
+		dst_data = rte_pktmbuf_mtod(dsts[i], char *);
+		for (j = 0; j < COPY_LEN; j++)
+			if (src_data[j] != dst_data[j])
+				ERR_RETURN("Error with copy of packet %u, byte %u\n", i, j);
+
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
 static int
 test_enqueue_copies(int dev_id, uint16_t vchan)
 {
@@ -177,7 +269,14 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 		rte_pktmbuf_free(dst);
 	} while (0);

-	return 0;
+	/* test doing multiple copies */
+	return do_multi_copies(dev_id, vchan, 0, 0, 0) /* enqueue and complete 1 batch at a time */
+			/* enqueue 2 batches and then complete both */
+			|| do_multi_copies(dev_id, vchan, 1, 0, 0)
+			/* enqueue 1 batch, then complete in two halves */
+			|| do_multi_copies(dev_id, vchan, 0, 1, 0)
+			/* test using completed_status in place of regular completed API */
+			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }

 static int
--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v5 7/9] app/test: test dmadev instance failure handling
  2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (5 preceding siblings ...)
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 6/9] app/test: add more comprehensive " Bruce Richardson
@ 2021-09-17 13:54   ` Bruce Richardson
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 8/9] app/test: add dmadev fill tests Bruce Richardson
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 9/9] app/test: add dmadev burst capacity API test Bruce Richardson
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:54 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a series of tests to inject bad copy operations into a dmadev to
test the error handling and reporting capabilities. Various combinations
of errors in various positions in a burst are tested, as are errors in
bursts with fence flag set, and multiple errors in a single burst.
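
In essence the error injection works as sketched below (illustration only; it
assumes IOVA-as-VA mode so that a 0 source address is guaranteed to fault, and
buffer setup is omitted):

#include <stdbool.h>
#include <rte_dmadev.h>

/* enqueue one bad copy (src = 0) followed by one good one, then show the
 * completed()/completed_status() split around the failure
 */
static int
inject_one_error(int16_t dev_id, uint16_t vchan, rte_iova_t good_src,
        rte_iova_t dst, uint32_t len)
{
    enum rte_dma_status_code status;
    uint16_t last_idx;
    bool has_error = false;

    rte_dma_copy(dev_id, vchan, 0, dst, len, 0);        /* will fail */
    rte_dma_copy(dev_id, vchan, good_src, dst, len, 0); /* may be skipped */
    rte_dma_submit(dev_id, vchan);

    /* completed() stops short of the failed job and sets has_error */
    while (rte_dma_completed(dev_id, vchan, 2, &last_idx, &has_error) == 0 &&
            !has_error)
        ;
    /* completed_status() then reports the error code for that job */
    while (rte_dma_completed_status(dev_id, vchan, 1, &last_idx, &status) == 0)
        ;
    return status != RTE_DMA_STATUS_SUCCESSFUL ? 0 : -1;
}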

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 357 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 357 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 3a1373b1ef..656012239d 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -279,6 +279,354 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }

+/* Failure handling test cases - global macros and variables for those tests*/
+#define COMP_BURST_SZ	16
+#define OPT_FENCE(idx) ((fence && idx == 8) ? RTE_DMA_OP_FLAG_FENCE : 0)
+
+static int
+test_failure_in_full_burst(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test single full batch statuses with failures */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	struct rte_dma_stats baseline, stats;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count;
+	unsigned int i;
+	bool error = false;
+	int err_count = 0;
+
+	rte_dma_stats_get(dev_id, vchan, &baseline); /* get a baseline set of stats */
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(i == fail_idx ? 0 : (srcs[i]->buf_iova + srcs[i]->data_off)),
+				dsts[i]->buf_iova + dsts[i]->data_off,
+				COPY_LEN, OPT_FENCE(i));
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", i);
+		if (i == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	rte_dma_stats_get(dev_id, vchan, &stats);
+	if (stats.submitted != baseline.submitted + COMP_BURST_SZ)
+		ERR_RETURN("Submitted stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.submitted, baseline.submitted + COMP_BURST_SZ);
+
+	await_hw(dev_id, vchan);
+
+	count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx)
+		ERR_RETURN("Error with rte_dma_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+	if (!error)
+		ERR_RETURN("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+
+	/* all checks ok, now verify calling completed() again always returns 0 */
+	for (i = 0; i < 10; i++)
+		if (rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error) != 0
+				|| error == false || idx != (invalid_addr_id - 1))
+			ERR_RETURN("Error with follow-up completed calls for fail idx %u\n",
+					fail_idx);
+
+	status_count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ,
+			&idx, status);
+	/* Some HW may stop on error and be restarted only after the error status for the failed
+	 * job has been read out. To handle this case, if we get just one error back, wait for
+	 * more completions and get the status for the rest of the burst.
+	 */
+	if (status_count == 1) {
+		await_hw(dev_id, vchan);
+		status_count += rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ - 1,
+					&idx, &status[1]);
+	}
+	/* check that at this point we have all status values */
+	if (status_count != COMP_BURST_SZ - count)
+		ERR_RETURN("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+	/* now verify just one failure followed by multiple successful or skipped entries */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+	for (i = 1; i < status_count; i++)
+		/* after a failure in a burst, depending on ordering/fencing,
+		 * operations may be successful or skipped because of previous error.
+		 */
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[i] != RTE_DMA_STATUS_NOT_ATTEMPTED)
+			ERR_RETURN("Error with status calls for fail idx %u. Status for job %u (of %u) is not successful\n",
+					fail_idx, count + i, COMP_BURST_SZ);
+
+	/* check the completed + errors stats are as expected */
+	rte_dma_stats_get(dev_id, vchan, &stats);
+	if (stats.completed != baseline.completed + COMP_BURST_SZ)
+		ERR_RETURN("Completed stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.completed, baseline.completed + COMP_BURST_SZ);
+	for (i = 0; i < status_count; i++)
+		err_count += (status[i] != RTE_DMA_STATUS_SUCCESSFUL);
+	if (stats.errors != baseline.errors + err_count)
+		ERR_RETURN("'Errors' stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.errors, baseline.errors + err_count);
+
+	return 0;
+}
+
+static int
+test_individual_status_query_with_failure(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test gathering batch statuses one at a time */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count = 0, status_count = 0;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, OPT_FENCE(j));
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* use regular "completed" until we hit error */
+	while (!error) {
+		uint16_t n = rte_dma_completed(dev_id, vchan, 1, &idx, &error);
+		count += n;
+		if (n > 1 || count >= COMP_BURST_SZ)
+			ERR_RETURN("Error - too many completions received\n");
+		if (n == 0 && !error)
+			ERR_RETURN("Error, unexpectedly got zero completions after %u completed\n",
+					count);
+	}
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, last successful index not as expected, got %u, expected %u\n",
+				idx, invalid_addr_id - 1);
+
+	/* use completed_status until we hit end of burst */
+	while (count + status_count < COMP_BURST_SZ) {
+		uint16_t n = rte_dma_completed_status(dev_id, vchan, 1, &idx,
+				&status[status_count]);
+		await_hw(dev_id, vchan); /* allow delay to ensure jobs are completed */
+		status_count += n;
+		if (n != 1)
+			ERR_RETURN("Error: unexpected number of completions received, %u, not 1\n",
+					n);
+	}
+
+	/* check for single failure */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error, unexpected successful DMA transaction\n");
+	for (j = 1; j < status_count; j++)
+		if (status[j] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[j] != RTE_DMA_STATUS_NOT_ATTEMPTED)
+			ERR_RETURN("Error, unexpected DMA error reported\n");
+
+	return 0;
+}
+
+static int
+test_single_item_status_query_with_failure(int dev_id, uint16_t vchan,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* When an error occurs, just collect a single error using "completed_status()"
+	 * before going back to completed() calls
+	 */
+	enum rte_dma_status_code status;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count, count2;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* get up to the error point */
+	count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx)
+		ERR_RETURN("Error with rte_dma_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+	if (!error)
+		ERR_RETURN("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+
+	/* get the error code */
+	status_count = rte_dma_completed_status(dev_id, vchan, 1, &idx, &status);
+	if (status_count != 1)
+		ERR_RETURN("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+	if (status == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+
+	/* delay in case time needed after err handled to complete other jobs */
+	await_hw(dev_id, vchan);
+
+	/* get the rest of the completions without status */
+	count2 = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (error == true)
+		ERR_RETURN("Error, got further errors post completed_status() call, for failure case %u.\n",
+				fail_idx);
+	if (count + status_count + count2 != COMP_BURST_SZ)
+		ERR_RETURN("Error, incorrect number of completions received, got %u not %u\n",
+				count + status_count + count2, COMP_BURST_SZ);
+
+	return 0;
+}
+
+static int
+test_multi_failure(int dev_id, uint16_t vchan, struct rte_mbuf **srcs, struct rte_mbuf **dsts,
+		const unsigned int *fail, size_t num_fail)
+{
+	/* test having multiple errors in one go */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	unsigned int i, j;
+	uint16_t count, err_count = 0;
+	bool error = false;
+
+	/* enqueue and gather completions in one go */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere in the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dma_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ, NULL, status);
+	while (count < COMP_BURST_SZ) {
+		await_hw(dev_id, vchan);
+
+		uint16_t ret = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ - count,
+				NULL, &status[count]);
+		if (ret == 0)
+			ERR_RETURN("Error getting all completions for jobs. Got %u of %u\n",
+					count, COMP_BURST_SZ);
+		count += ret;
+	}
+	for (i = 0; i < count; i++)
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
+			err_count++;
+
+	if (err_count != num_fail)
+		ERR_RETURN("Error: Invalid number of failed completions returned, %u; expected %zu\n",
+			err_count, num_fail);
+
+	/* enqueue and gather completions in bursts, but getting errors one at a time */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere in the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dma_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = 0;
+	err_count = 0;
+	while (count + err_count < COMP_BURST_SZ) {
+		count += rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, NULL, &error);
+		if (error) {
+			uint16_t ret = rte_dma_completed_status(dev_id, vchan, 1,
+					NULL, status);
+			if (ret != 1)
+				ERR_RETURN("Error getting error-status for completions\n");
+			err_count += ret;
+			await_hw(dev_id, vchan);
+		}
+	}
+	if (err_count != num_fail)
+		ERR_RETURN("Error: Incorrect number of failed completions received, got %u not %zu\n",
+				err_count, num_fail);
+
+	return 0;
+}
+
+static int
+test_completion_status(int dev_id, uint16_t vchan, bool fence)
+{
+	const unsigned int fail[] = {0, 7, 14, 15};
+	struct rte_mbuf *srcs[COMP_BURST_SZ], *dsts[COMP_BURST_SZ];
+	unsigned int i;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+	}
+
+	for (i = 0; i < RTE_DIM(fail); i++) {
+		if (test_failure_in_full_burst(dev_id, vchan, fence, srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		if (test_individual_status_query_with_failure(dev_id, vchan, fence,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		/* the test runs the same fenced or unfenced, but no harm in running it twice */
+		if (test_single_item_status_query_with_failure(dev_id, vchan,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+	}
+
+	if (test_multi_failure(dev_id, vchan, srcs, dsts, fail, RTE_DIM(fail)) < 0)
+		return -1;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
+static int
+test_completion_handling(int dev_id, uint16_t vchan)
+{
+	return test_completion_status(dev_id, vchan, false)              /* without fences */
+			|| test_completion_status(dev_id, vchan, true);  /* with fences */
+
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -335,6 +683,15 @@ test_dmadev_instance(uint16_t dev_id)
 	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
 		goto err;

+	/* to test error handling we can provide null pointers for source or dest in copies. This
+	 * requires VA mode in DPDK, since NULL(0) is a valid physical address.
+	 */
+	if (rte_eal_iova_mode() != RTE_IOVA_VA)
+		printf("DMA Dev %u: DPDK not in VA mode, skipping error handling tests\n", dev_id);
+	else if (runtest("error handling", test_completion_handling, 1,
+			dev_id, vchan, !CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v5 8/9] app/test: add dmadev fill tests
  2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (6 preceding siblings ...)
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 7/9] app/test: test dmadev instance failure handling Bruce Richardson
@ 2021-09-17 13:54   ` Bruce Richardson
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 9/9] app/test: add dmadev burst capacity API test Bruce Richardson
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:54 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

For dma devices which support the fill operation, run unit tests to
verify fill behaviour is correct.
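
Each length check reduces to roughly the following (sketch only; dst_va is
assumed to be the CPU-visible address backing dst_iova):

#include <rte_dmadev.h>

/* fill 'len' bytes at dst_iova with a repeating 64-bit pattern, wait for
 * the single completion and verify the bytes through dst_va
 */
static int
fill_and_check(int16_t dev_id, uint16_t vchan, rte_iova_t dst_iova,
        const char *dst_va, unsigned int len)
{
    const uint64_t pattern = 0xfedcba9876543210;
    unsigned int i;

    if (rte_dma_fill(dev_id, vchan, pattern, dst_iova, len,
            RTE_DMA_OP_FLAG_SUBMIT) < 0)
        return -1;
    while (rte_dma_completed(dev_id, vchan, 1, NULL, NULL) == 0)
        ;
    for (i = 0; i < len; i++)
        if (dst_va[i] != ((const char *)&pattern)[i % 8])
            return -1;
    return 0;
}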

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 49 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 656012239d..f763cf273c 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -624,7 +624,51 @@ test_completion_handling(int dev_id, uint16_t vchan)
 {
 	return test_completion_status(dev_id, vchan, false)              /* without fences */
 			|| test_completion_status(dev_id, vchan, true);  /* with fences */
+}
+
+static int
+test_enqueue_fill(int dev_id, uint16_t vchan)
+{
+	const unsigned int lengths[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst;
+	char *dst_data;
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	dst = rte_pktmbuf_alloc(pool);
+	if (dst == NULL)
+		ERR_RETURN("Failed to allocate mbuf\n");
+	dst_data = rte_pktmbuf_mtod(dst, char *);
+
+	for (i = 0; i < RTE_DIM(lengths); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, rte_pktmbuf_data_len(dst));
+
+		/* perform the fill operation */
+		int id = rte_dma_fill(dev_id, vchan, pattern,
+				rte_pktmbuf_iova(dst), lengths[i], RTE_DMA_OP_FLAG_SUBMIT);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_fill\n");
+		await_hw(dev_id, vchan);
+
+		if (rte_dma_completed(dev_id, vchan, 1, NULL, NULL) != 1)
+			ERR_RETURN("Error: fill operation failed (length: %u)\n", lengths[i]);
+		/* check the data from the fill operation is correct */
+		for (j = 0; j < lengths[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte)
+				ERR_RETURN("Error with fill operation (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], pat_byte);
+		}
+		/* check that the data after the fill operation was not written to */
+		for (; j < rte_pktmbuf_data_len(dst); j++)
+			if (dst_data[j] != 0)
+				ERR_RETURN("Error, fill operation wrote too far (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], 0);
+	}

+	rte_pktmbuf_free(dst);
+	return 0;
 }

 static int
@@ -692,6 +736,11 @@ test_dmadev_instance(uint16_t dev_id)
 			dev_id, vchan, !CHECK_ERRS) < 0)
 		goto err;

+	if ((info.dev_capa & RTE_DMA_CAPA_OPS_FILL) == 0)
+		printf("DMA Dev %u: No device fill support, skipping fill tests\n", dev_id);
+	else if (runtest("fill", test_enqueue_fill, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v5 9/9] app/test: add dmadev burst capacity API test
  2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
                     ` (7 preceding siblings ...)
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 8/9] app/test: add dmadev fill tests Bruce Richardson
@ 2021-09-17 13:54   ` Bruce Richardson
  8 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-17 13:54 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Add a test case to validate the functionality of drivers' burst capacity
API implementations.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 68 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index f763cf273c..98fcab67f3 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -671,6 +671,69 @@ test_enqueue_fill(int dev_id, uint16_t vchan)
 	return 0;
 }

+static int
+test_burst_capacity(int dev_id, uint16_t vchan)
+{
+#define CAP_TEST_BURST_SIZE	64
+	const int ring_space = rte_dma_burst_capacity(dev_id, vchan);
+	struct rte_mbuf *src, *dst;
+	int i, j, iter;
+	int cap, ret;
+	bool dma_err;
+
+	src = rte_pktmbuf_alloc(pool);
+	dst = rte_pktmbuf_alloc(pool);
+
+	/* to test capacity, we enqueue elements and check capacity is reduced
+	 * by one each time - rebaselining the expected value after each burst
+	 * as the capacity is only for a burst. We enqueue multiple bursts to
+	 * fill up half the ring, before emptying it again. We do this twice to
+	 * ensure that we get to test scenarios where we get ring wrap-around
+	 */
+	for (iter = 0; iter < 2; iter++) {
+		for (i = 0; i < (ring_space / (2 * CAP_TEST_BURST_SIZE)) + 1; i++) {
+			cap = rte_dma_burst_capacity(dev_id, vchan);
+
+			for (j = 0; j < CAP_TEST_BURST_SIZE; j++) {
+				ret = rte_dma_copy(dev_id, vchan, rte_pktmbuf_iova(src),
+						rte_pktmbuf_iova(dst), COPY_LEN, 0);
+				if (ret < 0)
+					ERR_RETURN("Error with rte_dma_copy\n");
+
+				if (rte_dma_burst_capacity(dev_id, vchan) != cap - (j + 1))
+					ERR_RETURN("Error, ring capacity did not change as expected\n");
+			}
+			if (rte_dma_submit(dev_id, vchan) < 0)
+				ERR_RETURN("Error, failed to submit burst\n");
+
+			if (cap < rte_dma_burst_capacity(dev_id, vchan))
+				ERR_RETURN("Error, avail ring capacity has gone up, not down\n");
+		}
+		await_hw(dev_id, vchan);
+
+		for (i = 0; i < (ring_space / (2 * CAP_TEST_BURST_SIZE)) + 1; i++) {
+			ret = rte_dma_completed(dev_id, vchan,
+					CAP_TEST_BURST_SIZE, NULL, &dma_err);
+			if (ret != CAP_TEST_BURST_SIZE || dma_err) {
+				enum rte_dma_status_code status;
+
+				rte_dma_completed_status(dev_id, vchan, 1, NULL, &status);
+				ERR_RETURN("Error with rte_dma_completed, %u [expected: %u], dma_err = %d, i = %u, iter = %u, status = %u\n",
+						ret, CAP_TEST_BURST_SIZE, dma_err, i, iter, status);
+			}
+		}
+		cap = rte_dma_burst_capacity(dev_id, vchan);
+		if (cap != ring_space)
+			ERR_RETURN("Error, ring capacity has not reset to original value, got %u, expected %u\n",
+					cap, ring_space);
+	}
+
+	rte_pktmbuf_free(src);
+	rte_pktmbuf_free(dst);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -741,6 +804,11 @@ test_dmadev_instance(uint16_t dev_id)
 	else if (runtest("fill", test_enqueue_fill, 1, dev_id, vchan, CHECK_ERRS) < 0)
 		goto err;

+	if (rte_dma_burst_capacity(dev_id, vchan) == -ENOTSUP)
+		printf("DMA Dev %u: Burst capacity API not supported, skipping tests\n", dev_id);
+	else if (runtest("burst capacity", test_burst_capacity, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-09  8:16       ` Bruce Richardson
@ 2021-09-17 13:54         ` Jerin Jacob
  2021-09-17 14:37           ` Pai G, Sunil
  2021-09-18  1:06           ` Hu, Jiayu
  0 siblings, 2 replies; 130+ messages in thread
From: Jerin Jacob @ 2021-09-17 13:54 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, fengchengwen, Jerin Jacob,
	Satananda Burla, Radha Mohan Chintakuntla, Jiayu Hu, Sunil Pai G

On Thu, Sep 9, 2021 at 1:46 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Wed, Sep 08, 2021 at 11:47:59PM +0530, Jerin Jacob wrote:
> >    On Tue, 7 Sep 2021, 10:25 pm Bruce Richardson,
> >    <[1]bruce.richardson@intel.com> wrote:
> >
> >      From: Kevin Laatz <[2]kevin.laatz@intel.com>
> >      Add a burst capacity check API to the dmadev library. This API is
> >      useful to
> >      applications which need to how many descriptors can be enqueued in
> >      the
> >      current batch. For example, it could be used to determine whether
> >      all
> >      segments of a multi-segment packet can be enqueued in the same batch
> >      or not
> >      (to avoid half-offload of the packet).
> >
> >     #Could you share more details on the use case with vhost?
> >    # Are they planning to use this in fast path if so it need to move as
> >    fast path function pointer?
>
> I believe the intent is to use it on fastpath, but I would assume only once
> per burst, so the penalty for non-fastpath may be acceptable. As you point
> out - for an app that really doesn't want to have to pay that penalty,
> tracking ring use itself is possible.
>
> The desire for fast-path use is also why I suggested having the space as an
> optional return parameter from the submit API call. It could logically also
> be a return value from the "completed" call, which might actually make more
> sense.
>
> >    # Assume the use case needs N rte_dma_copy to complete a logical copy
> >    at vhost level. Is the any issue in half-offload, meaning when N th one
> >    successfully completed then only the logical copy is completed. Right?
>
> Yes, as I understand it, the issue is for multi-segment packets, where we
> only want to enqueue the first segment if we know we will success with the
> final one too.

Sorry for the delay in reply.

If so, why do we need this API? We can mark a logical transaction completed
IFF the final segment succeeds. Since this is a fastpath API, I would like to
really understand the real use case for it, so that if it is required we can
implement it in an optimized way.
Otherwise drivers do not need to implement this to have a generic solution for
all the drivers.

>
> >    # There is already nb_desc with which a dma_queue is configured. So if
> >    the application does its accounting properly, it knows how many desc it
> >    has used up and how many completions it has processed.
>
> Agreed. It's just more work for the app, and for simplicity and
> completeness I think we should add this API. Because there are other
> options I think it should be available, but not as a fast-path fn (though
> again, the difference is likely very small for something not called for
> every enqueue).
>
> >    Would like to understand more details on this API usage.
> >
> Adding Sunil and Jiayu on CC who are looking at this area from the OVS and
> vhost sides.

See above.

Sunil. Jiayu, Could you share the details on the usage and why it is needed.


>
> /Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-17 13:54         ` Jerin Jacob
@ 2021-09-17 14:37           ` Pai G, Sunil
  2021-09-18 12:18             ` Jerin Jacob
  2021-09-18  1:06           ` Hu, Jiayu
  1 sibling, 1 reply; 130+ messages in thread
From: Pai G, Sunil @ 2021-09-17 14:37 UTC (permalink / raw)
  To: Jerin Jacob, Richardson, Bruce
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, fengchengwen, Jerin Jacob,
	Satananda Burla, Radha Mohan Chintakuntla, Hu, Jiayu

Hi Jerin,  

<snipped>

> > >      Add a burst capacity check API to the dmadev library. This API is
> > >      useful to
> > >      applications which need to how many descriptors can be enqueued in
> > >      the
> > >      current batch. For example, it could be used to determine whether
> > >      all
> > >      segments of a multi-segment packet can be enqueued in the same
> batch
> > >      or not
> > >      (to avoid half-offload of the packet).
> > >
> > >     #Could you share more details on the use case with vhost?
> > >    # Are they planning to use this in fast path if so it need to move as
> > >    fast path function pointer?
> >
> > I believe the intent is to use it on fastpath, but I would assume only
> > once per burst, so the penalty for non-fastpath may be acceptable. As
> > you point out - for an app that really doesn't want to have to pay
> > that penalty, tracking ring use itself is possible.
> >
> > The desire for fast-path use is also why I suggested having the space
> > as an optional return parameter from the submit API call. It could
> > logically also be a return value from the "completed" call, which
> > might actually make more sense.
> >
> > >    # Assume the use case needs N rte_dma_copy to complete a logical
> copy
> > >    at vhost level. Is the any issue in half-offload, meaning when N th one
> > >    successfully completed then only the logical copy is completed. Right?
> >
> > Yes, as I understand it, the issue is for multi-segment packets, where
> > we only want to enqueue the first segment if we know we will success
> > with the final one too.

Yes, this is true. We want to avoid scenarios where only parts of packets could be enqueued.

> 
> Sorry for the delay in reply.
> 
> If so, why do we need this API. We can mark a logical transaction completed
> IFF final segment is succeeded. Since this fastpath API, I would like to really
> understand the real use case for it, so if required then we need to
> implement in an optimized way.
> Otherwise driver does not need to implement this to have generic solution
> for all the drivers.
> 
> >
> > >    # There is already nb_desc with which a dma_queue is configured. So if
> > >    the application does its accounting properly, it knows how many desc it
> > >    has used up and how many completions it has processed.
> >
> > Agreed. It's just more work for the app, and for simplicity and
> > completeness I think we should add this API. Because there are other
> > options I think it should be available, but not as a fast-path fn
> > (though again, the difference is likely very small for something not
> > called for every enqueue).
> >
> > >    Would like to understand more details on this API usage.
> > >
> > Adding Sunil and Jiayu on CC who are looking at this area from the OVS
> > and vhost sides.
> 
> See above.
> 
> Sunil. Jiayu, Could you share the details on the usage and why it is needed.

Here is an example of how the burst capacity API could potentially be used in the app (OVS):
http://patchwork.ozlabs.org/project/openvswitch/patch/20210907120021.4093604-2-sunil.pai.g@intel.com/
Although commented out, it should still provide an idea of its usage.

Thanks and regards,
Sunil


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-17 13:54         ` Jerin Jacob
  2021-09-17 14:37           ` Pai G, Sunil
@ 2021-09-18  1:06           ` Hu, Jiayu
  2021-09-18 12:12             ` Jerin Jacob
  1 sibling, 1 reply; 130+ messages in thread
From: Hu, Jiayu @ 2021-09-18  1:06 UTC (permalink / raw)
  To: Jerin Jacob, Richardson, Bruce
  Cc: dpdk-dev, Walsh, Conor, Laatz, Kevin, fengchengwen, Jerin Jacob,
	Satananda Burla, Radha Mohan Chintakuntla, Pai G, Sunil

Hi Jerin,

> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Friday, September 17, 2021 9:55 PM
> To: Richardson, Bruce <bruce.richardson@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Walsh, Conor <conor.walsh@intel.com>;
> Laatz, Kevin <kevin.laatz@intel.com>; fengchengwen
> <fengchengwen@huawei.com>; Jerin Jacob <jerinj@marvell.com>;
> Satananda Burla <sburla@marvell.com>; Radha Mohan Chintakuntla
> <radhac@marvell.com>; Hu, Jiayu <jiayu.hu@intel.com>; Pai G, Sunil
> <sunil.pai.g@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
> 
> On Thu, Sep 9, 2021 at 1:46 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Wed, Sep 08, 2021 at 11:47:59PM +0530, Jerin Jacob wrote:
> > >    On Tue, 7 Sep 2021, 10:25 pm Bruce Richardson,
> > >    <[1]bruce.richardson@intel.com> wrote:
> > >
> > >      From: Kevin Laatz <[2]kevin.laatz@intel.com>
> > >      Add a burst capacity check API to the dmadev library. This API is
> > >      useful to
> > >      applications which need to how many descriptors can be enqueued in
> > >      the
> > >      current batch. For example, it could be used to determine whether
> > >      all
> > >      segments of a multi-segment packet can be enqueued in the same
> batch
> > >      or not
> > >      (to avoid half-offload of the packet).
> > >
> > >     #Could you share more details on the use case with vhost?
> > >    # Are they planning to use this in fast path if so it need to move as
> > >    fast path function pointer?
> >
> > I believe the intent is to use it on fastpath, but I would assume only
> > once per burst, so the penalty for non-fastpath may be acceptable. As
> > you point out - for an app that really doesn't want to have to pay
> > that penalty, tracking ring use itself is possible.
> >
> > The desire for fast-path use is also why I suggested having the space
> > as an optional return parameter from the submit API call. It could
> > logically also be a return value from the "completed" call, which
> > might actually make more sense.
> >
> > >    # Assume the use case needs N rte_dma_copy to complete a logical
> copy
> > >    at vhost level. Is the any issue in half-offload, meaning when N th one
> > >    successfully completed then only the logical copy is completed. Right?
> >
> > Yes, as I understand it, the issue is for multi-segment packets, where
> > we only want to enqueue the first segment if we know we will success
> > with the final one too.
> 
> Sorry for the delay in reply.
> 
> If so, why do we need this API. We can mark a logical transaction completed
> IFF final segment is succeeded. Since this fastpath API, I would like to really
> understand the real use case for it, so if required then we need to implement
> in an optimized way.
> Otherwise driver does not need to implement this to have generic solution
> for all the drivers.

The fact is that it's very hard for apps to calculate the available space of a DMA ring.
For DSA, the available space is decided by three factors: the number of available slots
in the SW ring, the max batching size of a batch descriptor, and whether there are available
batch descriptors. The first one is configured by SW, and apps can calculate it. But the
second depends on the DSA HW, and the third is hidden in the DSA driver and not visible to
apps. Considering the complexity of different DMA HW, I think the best way is to hide all
the details inside the DMA dev and provide this capacity check API for apps.

Thanks,
Jiayu

> 
> >
> > >    # There is already nb_desc with which a dma_queue is configured. So if
> > >    the application does its accounting properly, it knows how many desc it
> > >    has used up and how many completions it has processed.
> >
> > Agreed. It's just more work for the app, and for simplicity and
> > completeness I think we should add this API. Because there are other
> > options I think it should be available, but not as a fast-path fn
> > (though again, the difference is likely very small for something not
> > called for every enqueue).
> >
> > >    Would like to understand more details on this API usage.
> > >
> > Adding Sunil and Jiayu on CC who are looking at this area from the OVS
> > and vhost sides.
> 
> See above.
> 
> Sunil. Jiayu, Could you share the details on the usage and why it is needed.
> 
> 
> >
> > /Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-18  1:06           ` Hu, Jiayu
@ 2021-09-18 12:12             ` Jerin Jacob
  2021-09-21 13:57               ` Pai G, Sunil
  0 siblings, 1 reply; 130+ messages in thread
From: Jerin Jacob @ 2021-09-18 12:12 UTC (permalink / raw)
  To: Hu, Jiayu
  Cc: Richardson, Bruce, dpdk-dev, Walsh, Conor, Laatz, Kevin,
	fengchengwen, Jerin Jacob, Satananda Burla,
	Radha Mohan Chintakuntla, Pai G, Sunil

On Sat, Sep 18, 2021 at 6:36 AM Hu, Jiayu <jiayu.hu@intel.com> wrote:
>
> Hi Jerin,
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Friday, September 17, 2021 9:55 PM
> > To: Richardson, Bruce <bruce.richardson@intel.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Walsh, Conor <conor.walsh@intel.com>;
> > Laatz, Kevin <kevin.laatz@intel.com>; fengchengwen
> > <fengchengwen@huawei.com>; Jerin Jacob <jerinj@marvell.com>;
> > Satananda Burla <sburla@marvell.com>; Radha Mohan Chintakuntla
> > <radhac@marvell.com>; Hu, Jiayu <jiayu.hu@intel.com>; Pai G, Sunil
> > <sunil.pai.g@intel.com>
> > Subject: Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
> >
> > On Thu, Sep 9, 2021 at 1:46 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Wed, Sep 08, 2021 at 11:47:59PM +0530, Jerin Jacob wrote:
> > > >    On Tue, 7 Sep 2021, 10:25 pm Bruce Richardson,
> > > >    <[1]bruce.richardson@intel.com> wrote:
> > > >
> > > >      From: Kevin Laatz <[2]kevin.laatz@intel.com>
> > > >      Add a burst capacity check API to the dmadev library. This API is
> > > >      useful to
> > > >      applications which need to how many descriptors can be enqueued in
> > > >      the
> > > >      current batch. For example, it could be used to determine whether
> > > >      all
> > > >      segments of a multi-segment packet can be enqueued in the same
> > batch
> > > >      or not
> > > >      (to avoid half-offload of the packet).
> > > >
> > > >     #Could you share more details on the use case with vhost?
> > > >    # Are they planning to use this in fast path if so it need to move as
> > > >    fast path function pointer?
> > >
> > > I believe the intent is to use it on fastpath, but I would assume only
> > > once per burst, so the penalty for non-fastpath may be acceptable. As
> > > you point out - for an app that really doesn't want to have to pay
> > > that penalty, tracking ring use itself is possible.
> > >
> > > The desire for fast-path use is also why I suggested having the space
> > > as an optional return parameter from the submit API call. It could
> > > logically also be a return value from the "completed" call, which
> > > might actually make more sense.
> > >
> > > >    # Assume the use case needs N rte_dma_copy to complete a logical
> > copy
> > > >    at vhost level. Is the any issue in half-offload, meaning when N th one
> > > >    successfully completed then only the logical copy is completed. Right?
> > >
> > > Yes, as I understand it, the issue is for multi-segment packets, where
> > > we only want to enqueue the first segment if we know we will success
> > > with the final one too.
> >
> > Sorry for the delay in reply.
> >
> > If so, why do we need this API. We can mark a logical transaction completed
> > IFF final segment is succeeded. Since this fastpath API, I would like to really
> > understand the real use case for it, so if required then we need to implement
> > in an optimized way.
> > Otherwise driver does not need to implement this to have generic solution
> > for all the drivers.


Hi Jiayu, Sunil,

> The fact is  that it's very hard for apps to calculate the available space of a DMA ring.

Yes, I agree.

My question is more about why we should calculate the space per burst and introduce
yet another fastpath API.
For example, say the application needs to copy 8 segments to complete one
logical copy from the application's perspective.
In that case, only when the 8th copy is completed does the application mark
the logical copy as completed.
i.e. why check per burst whether 8 segments are available or not? Even if space
is available, there may be multiple
reasons why any of the segment copies can fail, so the application
needs to track whether all the jobs completed
anyway. Am I missing something in terms of vhost or OVS usage?
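
As a sketch of what that application-side tracking could look like (API names are from this
dmadev series; dev_id, vchan and the segment arrays are assumed to come from the caller, and
this is illustrative only, not a proposed implementation):

#include <stdint.h>

#include <rte_dmadev.h>

/* Illustrative only: enqueue all segments of one logical copy and remember
 * the index of the final segment. The logical copy is marked done only when
 * that final index is reported back by rte_dma_completed(); a failure of any
 * segment fails the whole logical copy and is handled by the caller. */
static int
enqueue_logical_copy(int16_t dev_id, uint16_t vchan,
		     const rte_iova_t *src, const rte_iova_t *dst,
		     const uint32_t *len, uint16_t nb_seg, uint16_t *final_idx)
{
	int idx = -1;
	uint16_t i;

	for (i = 0; i < nb_seg; i++) {
		idx = rte_dma_copy(dev_id, vchan, src[i], dst[i], len[i],
				(i == nb_seg - 1) ? RTE_DMA_OP_FLAG_SUBMIT : 0);
		if (idx < 0)
			return idx; /* ring full or error, caller decides */
	}
	*final_idx = (uint16_t)idx;
	return 0;
}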


> For DSA, the available space is decided by three factors: the number of available slots
> in SW ring, the max batching size of a batch descriptor, and if there are available batch
> descriptors. The first one is configured by SW, and apps can calculate it. But the second
> depends on DSA HW, and the third one is hided in DSA driver which is not visible to apps.
> Considering the complexity of different DMA HW, I think the best way is to hide all details
> inside DMA dev and provide this check capacity API for apps.
>
> Thanks,
> Jiayu
>
> >
> > >
> > > >    # There is already nb_desc with which a dma_queue is configured. So if
> > > >    the application does its accounting properly, it knows how many desc it
> > > >    has used up and how many completions it has processed.
> > >
> > > Agreed. It's just more work for the app, and for simplicity and
> > > completeness I think we should add this API. Because there are other
> > > options I think it should be available, but not as a fast-path fn
> > > (though again, the difference is likely very small for something not
> > > called for every enqueue).
> > >
> > > >    Would like to understand more details on this API usage.
> > > >
> > > Adding Sunil and Jiayu on CC who are looking at this area from the OVS
> > > and vhost sides.
> >
> > See above.
> >
> > Sunil. Jiayu, Could you share the details on the usage and why it is needed.
> >
> >
> > >
> > > /Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-17 14:37           ` Pai G, Sunil
@ 2021-09-18 12:18             ` Jerin Jacob
  0 siblings, 0 replies; 130+ messages in thread
From: Jerin Jacob @ 2021-09-18 12:18 UTC (permalink / raw)
  To: Pai G, Sunil
  Cc: Richardson, Bruce, dpdk-dev, Walsh, Conor, Laatz, Kevin,
	fengchengwen, Jerin Jacob, Satananda Burla,
	Radha Mohan Chintakuntla, Hu, Jiayu

On Fri, Sep 17, 2021 at 8:07 PM Pai G, Sunil <sunil.pai.g@intel.com> wrote:
>
> Hi Jerin,

Hi Sunil,

>
> <snipped>
>
> > > >      Add a burst capacity check API to the dmadev library. This API is
> > > >      useful to
> > > >      applications which need to how many descriptors can be enqueued in
> > > >      the
> > > >      current batch. For example, it could be used to determine whether
> > > >      all
> > > >      segments of a multi-segment packet can be enqueued in the same
> > batch
> > > >      or not
> > > >      (to avoid half-offload of the packet).
> > > >
> > > >     #Could you share more details on the use case with vhost?
> > > >    # Are they planning to use this in fast path if so it need to move as
> > > >    fast path function pointer?
> > >
> > > I believe the intent is to use it on fastpath, but I would assume only
> > > once per burst, so the penalty for non-fastpath may be acceptable. As
> > > you point out - for an app that really doesn't want to have to pay
> > > that penalty, tracking ring use itself is possible.
> > >
> > > The desire for fast-path use is also why I suggested having the space
> > > as an optional return parameter from the submit API call. It could
> > > logically also be a return value from the "completed" call, which
> > > might actually make more sense.
> > >
> > > >    # Assume the use case needs N rte_dma_copy to complete a logical
> > copy
> > > >    at vhost level. Is the any issue in half-offload, meaning when N th one
> > > >    successfully completed then only the logical copy is completed. Right?
> > >
> > > Yes, as I understand it, the issue is for multi-segment packets, where
> > > we only want to enqueue the first segment if we know we will success
> > > with the final one too.
>
> Yes, this is true. We want to avoid scenarios where only parts of packets could be enqueued.

What are those scenarios? Could you share some descriptions of them?
What if the final segment, or any segment, fails even though the space is available?
You have to take care of that anyway. Right?


>
> >
> > Sorry for the delay in reply.
> >
> > If so, why do we need this API. We can mark a logical transaction completed
> > IFF final segment is succeeded. Since this fastpath API, I would like to really
> > understand the real use case for it, so if required then we need to
> > implement in an optimized way.
> > Otherwise driver does not need to implement this to have generic solution
> > for all the drivers.
> >
> > >
> > > >    # There is already nb_desc with which a dma_queue is configured. So if
> > > >    the application does its accounting properly, it knows how many desc it
> > > >    has used up and how many completions it has processed.
> > >
> > > Agreed. It's just more work for the app, and for simplicity and
> > > completeness I think we should add this API. Because there are other
> > > options I think it should be available, but not as a fast-path fn
> > > (though again, the difference is likely very small for something not
> > > called for every enqueue).
> > >
> > > >    Would like to understand more details on this API usage.
> > > >
> > > Adding Sunil and Jiayu on CC who are looking at this area from the OVS
> > > and vhost sides.
> >
> > See above.
> >
> > Sunil. Jiayu, Could you share the details on the usage and why it is needed.
>
> Here is an example of how the burst capacity API will be potentially used in the app(OVS):
> http://patchwork.ozlabs.org/project/openvswitch/patch/20210907120021.4093604-2-sunil.pai.g@intel.com/
> Although commented out , it should still provide an idea of its usage.
>
> Thanks and regards,
> Sunil
>

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-18 12:12             ` Jerin Jacob
@ 2021-09-21 13:57               ` Pai G, Sunil
  2021-09-21 14:56                 ` Jerin Jacob
  0 siblings, 1 reply; 130+ messages in thread
From: Pai G, Sunil @ 2021-09-21 13:57 UTC (permalink / raw)
  To: Jerin Jacob, Hu, Jiayu
  Cc: Richardson, Bruce, dpdk-dev, Walsh, Conor, Laatz, Kevin,
	fengchengwen, Jerin Jacob, Satananda Burla,
	Radha Mohan Chintakuntla

Hi Jerin, 

> > > > >      From: Kevin Laatz <[2]kevin.laatz@intel.com>
> > > > >      Add a burst capacity check API to the dmadev library. This API is
> > > > >      useful to
> > > > >      applications which need to how many descriptors can be enqueued
> in
> > > > >      the
> > > > >      current batch. For example, it could be used to determine whether
> > > > >      all
> > > > >      segments of a multi-segment packet can be enqueued in the
> > > > > same
> > > batch
> > > > >      or not
> > > > >      (to avoid half-offload of the packet).
> > > > >
> > > > >     #Could you share more details on the use case with vhost?
> > > > >    # Are they planning to use this in fast path if so it need to move as
> > > > >    fast path function pointer?
> > > >
> > > > I believe the intent is to use it on fastpath, but I would assume
> > > > only once per burst, so the penalty for non-fastpath may be
> > > > acceptable. As you point out - for an app that really doesn't want
> > > > to have to pay that penalty, tracking ring use itself is possible.
> > > >
> > > > The desire for fast-path use is also why I suggested having the
> > > > space as an optional return parameter from the submit API call. It
> > > > could logically also be a return value from the "completed" call,
> > > > which might actually make more sense.
> > > >
> > > > >    # Assume the use case needs N rte_dma_copy to complete a
> > > > > logical
> > > copy
> > > > >    at vhost level. Is the any issue in half-offload, meaning when N th
> one
> > > > >    successfully completed then only the logical copy is completed.
> Right?
> > > >
> > > > Yes, as I understand it, the issue is for multi-segment packets,
> > > > where we only want to enqueue the first segment if we know we will
> > > > success with the final one too.
> > >
> > > Sorry for the delay in reply.
> > >
> > > If so, why do we need this API. We can mark a logical transaction
> > > completed IFF final segment is succeeded. Since this fastpath API, I
> > > would like to really understand the real use case for it, so if
> > > required then we need to implement in an optimized way.
> > > Otherwise driver does not need to implement this to have generic
> > > solution for all the drivers.
> 
> 
> Hi Jiayu, Sunil,
> 
> > The fact is  that it's very hard for apps to calculate the available space of a
> DMA ring.
> 
> Yes, I agree.
> 
> My question is more why to calculate the space per burst and introduce yet
> another fastpath API.
> For example, the application needs to copy 8 segments to complete a logical
> copy in the application perspective.
> In case, when 8th copy is completed then only the application marks the
> logical copy completed.
> i.e why to check per burst, 8 segments are available or not? Even it is
> available, there may be multiple reasons why any of the segment copies can
> fail. So the application needs to track all the jobs completed or not anyway.
> Am I missing something in terms of vhost or OVS usage?
> 

For the packets that do not entirely fit in the DMA ring, we have a SW copy fallback in place.
So, we would like to avoid the scenario, caused by a full DMA ring, where some parts of the packet are copied through DMA and other parts by the CPU.
Besides, this API would also help improve debuggability/device introspection: the occupancy can be checked directly rather than the app having to manually track the state of every DMA device in use.
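
Roughly, the decision we want per packet is: either the whole packet goes through DMA or the
whole packet is copied by the CPU, never a mix. A sketch, with hypothetical helper names
standing in for our real copy paths, and assuming the proposed capacity API takes
(dev_id, vchan) and returns the number of free descriptors:

#include <stdint.h>

#include <rte_common.h>
#include <rte_dmadev.h>

/* Hypothetical stand-ins for the real DMA-offload and CPU-copy paths. */
static void
dma_copy_all_segs(int16_t dev_id, uint16_t vchan, uint16_t nb_seg)
{
	/* enqueue every segment of the packet to (dev_id, vchan) */
	RTE_SET_USED(dev_id);
	RTE_SET_USED(vchan);
	RTE_SET_USED(nb_seg);
}

static void
cpu_copy_all_segs(uint16_t nb_seg)
{
	/* memcpy() fallback for the whole packet */
	RTE_SET_USED(nb_seg);
}

/* Illustrative only: offload the packet only if all its segments fit. */
static void
offload_or_fallback(int16_t dev_id, uint16_t vchan, uint16_t nb_seg)
{
	if (rte_dma_burst_capacity(dev_id, vchan) >= nb_seg)
		dma_copy_all_segs(dev_id, vchan, nb_seg);
	else
		cpu_copy_all_segs(nb_seg);
}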

Copying from other thread:

> What are those scenarios, could you share some descriptions of them.
> What if the final or any segment fails event the space is available.
> So you have to take care of that anyway. RIght?

I think this is app dependent, no? The application can choose not to take care of such scenarios and simply treat the packets as dropped.
Ring-full scenarios (-ENOSPC from rte_dma_copy) could be avoided with this API, but other errors mean a failure which unfortunately cannot be avoided.
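
For completeness, the per-segment return-code handling looks roughly like this (a sketch only,
with dev_id/vchan/src/dst/len assumed from the caller); the problem is that by the time
-ENOSPC shows up, earlier segments of the packet may already be enqueued:

#include <errno.h>

#include <rte_dmadev.h>

/* Illustrative only: classify the result of enqueuing one segment. */
static int
enqueue_one_seg(int16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst,
		uint32_t len)
{
	int ret = rte_dma_copy(dev_id, vchan, src, dst, len, 0);

	if (ret == -ENOSPC)
		return 1;  /* ring full: recoverable, but a partial packet may be in flight */
	if (ret < 0)
		return -1; /* genuine failure, e.g. -EINVAL: treat the packet as dropped */
	return 0;          /* enqueued; ret holds the job index */
}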

> 
> > For DSA, the available space is decided by three factors: the number
> > of available slots in SW ring, the max batching size of a batch
> > descriptor, and if there are available batch descriptors. The first
> > one is configured by SW, and apps can calculate it. But the second depends
> on DSA HW, and the third one is hided in DSA driver which is not visible to
> apps.
> > Considering the complexity of different DMA HW, I think the best way
> > is to hide all details inside DMA dev and provide this check capacity API for
> apps.
> >
> > Thanks,
> > Jiayu
> >
> > >
> > > >
> > > > >    # There is already nb_desc with which a dma_queue is configured.
> So if
> > > > >    the application does its accounting properly, it knows how many
> desc it
> > > > >    has used up and how many completions it has processed.
> > > >
> > > > Agreed. It's just more work for the app, and for simplicity and
> > > > completeness I think we should add this API. Because there are
> > > > other options I think it should be available, but not as a
> > > > fast-path fn (though again, the difference is likely very small
> > > > for something not called for every enqueue).
> > > >
> > > > >    Would like to understand more details on this API usage.
> > > > >
> > > > Adding Sunil and Jiayu on CC who are looking at this area from the
> > > > OVS and vhost sides.
> > >
> > > See above.
> > >
> > > Sunil. Jiayu, Could you share the details on the usage and why it is
> needed.
> > >
> > >
> > > >
> > > > /Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-21 13:57               ` Pai G, Sunil
@ 2021-09-21 14:56                 ` Jerin Jacob
  2021-09-21 15:34                   ` Pai G, Sunil
  0 siblings, 1 reply; 130+ messages in thread
From: Jerin Jacob @ 2021-09-21 14:56 UTC (permalink / raw)
  To: Pai G, Sunil
  Cc: Hu, Jiayu, Richardson, Bruce, dpdk-dev, Walsh, Conor, Laatz,
	Kevin, fengchengwen, Jerin Jacob, Satananda Burla,
	Radha Mohan Chintakuntla

On Tue, Sep 21, 2021 at 7:46 PM Pai G, Sunil <sunil.pai.g@intel.com> wrote:
>
> Hi Jerin,

Hi Sunil,


>
> > > > > >      From: Kevin Laatz <[2]kevin.laatz@intel.com>
> > > > > >      Add a burst capacity check API to the dmadev library. This API is
> > > > > >      useful to
> > > > > >      applications which need to how many descriptors can be enqueued
> > in
> > > > > >      the
> > > > > >      current batch. For example, it could be used to determine whether
> > > > > >      all
> > > > > >      segments of a multi-segment packet can be enqueued in the
> > > > > > same
> > > > batch
> > > > > >      or not
> > > > > >      (to avoid half-offload of the packet).
> > > > > >
> > > > > >     #Could you share more details on the use case with vhost?
> > > > > >    # Are they planning to use this in fast path if so it need to move as
> > > > > >    fast path function pointer?
> > > > >
> > > > > I believe the intent is to use it on fastpath, but I would assume
> > > > > only once per burst, so the penalty for non-fastpath may be
> > > > > acceptable. As you point out - for an app that really doesn't want
> > > > > to have to pay that penalty, tracking ring use itself is possible.
> > > > >
> > > > > The desire for fast-path use is also why I suggested having the
> > > > > space as an optional return parameter from the submit API call. It
> > > > > could logically also be a return value from the "completed" call,
> > > > > which might actually make more sense.
> > > > >
> > > > > >    # Assume the use case needs N rte_dma_copy to complete a
> > > > > > logical
> > > > copy
> > > > > >    at vhost level. Is the any issue in half-offload, meaning when N th
> > one
> > > > > >    successfully completed then only the logical copy is completed.
> > Right?
> > > > >
> > > > > Yes, as I understand it, the issue is for multi-segment packets,
> > > > > where we only want to enqueue the first segment if we know we will
> > > > > success with the final one too.
> > > >
> > > > Sorry for the delay in reply.
> > > >
> > > > If so, why do we need this API. We can mark a logical transaction
> > > > completed IFF final segment is succeeded. Since this fastpath API, I
> > > > would like to really understand the real use case for it, so if
> > > > required then we need to implement in an optimized way.
> > > > Otherwise driver does not need to implement this to have generic
> > > > solution for all the drivers.
> >
> >
> > Hi Jiayu, Sunil,
> >
> > > The fact is  that it's very hard for apps to calculate the available space of a
> > DMA ring.
> >
> > Yes, I agree.
> >
> > My question is more why to calculate the space per burst and introduce yet
> > another fastpath API.
> > For example, the application needs to copy 8 segments to complete a logical
> > copy in the application perspective.
> > In case, when 8th copy is completed then only the application marks the
> > logical copy completed.
> > i.e why to check per burst, 8 segments are available or not? Even it is
> > available, there may be multiple reasons why any of the segment copies can
> > fail. So the application needs to track all the jobs completed or not anyway.
> > Am I missing something in terms of vhost or OVS usage?
> >
>
> For the packets that do not entirely fit in the DMA ring , we have a SW copy fallback in place.
> So, we would like to avoid scenario caused because of DMA ring full where few parts of the packet are copied through DMA and other parts by CPU.
> Besides, this API would also help improve debuggability/device introspection to check the occupancy rather than the app having to manually track the state of every DMA device in use.

To understand it better, could you share more details on the feedback
mechanism of your application's enqueue path?

app_enqueue_v1(.., nb_seg)
{
	/* Not enough space; let the application handle it by
	 * dropping or resubmitting. */
	if (rte_dmadev_burst_capacity() < nb_seg)
		return -ENOSPC;

	do rte_dma_op() in loop without checking error;
	return 0; /* Success */
}

vs

app_enqueue_v2(.., nb_seg)
{
	int rc;

	rc |= rte_dma_op() in loop without checking error;
	/* Return the actual status to the application; if there is not
	 * enough space, let the application handle it by dropping or
	 * resubmitting. */
	return rc;
}

Are app_enqueue_v1() and app_enqueue_v2() logically the same from the
application's PoV?

If not, could you explain which version you are planning to use for app_enqueue()?


> Copying from other thread:
>
> > What are those scenarios, could you share some descriptions of them.
> > What if the final or any segment fails event the space is available.
> > So you have to take care of that anyway. RIght?
>
> I think this is app dependent no?  The application can choose not to take care of such scenarios and treat the packets as dropped.
> Ring full scenarios(-ENOSPC from rte_dma_copy) could be avoided with this API but other errors mean a failure which unfortunately cannot be avoided.
>
> >
> > > For DSA, the available space is decided by three factors: the number
> > > of available slots in SW ring, the max batching size of a batch
> > > descriptor, and if there are available batch descriptors. The first
> > > one is configured by SW, and apps can calculate it. But the second depends
> > on DSA HW, and the third one is hided in DSA driver which is not visible to
> > apps.
> > > Considering the complexity of different DMA HW, I think the best way
> > > is to hide all details inside DMA dev and provide this check capacity API for
> > apps.
> > >
> > > Thanks,
> > > Jiayu
> > >
> > > >
> > > > >
> > > > > >    # There is already nb_desc with which a dma_queue is configured.
> > So if
> > > > > >    the application does its accounting properly, it knows how many
> > desc it
> > > > > >    has used up and how many completions it has processed.
> > > > >
> > > > > Agreed. It's just more work for the app, and for simplicity and
> > > > > completeness I think we should add this API. Because there are
> > > > > other options I think it should be available, but not as a
> > > > > fast-path fn (though again, the difference is likely very small
> > > > > for something not called for every enqueue).
> > > > >
> > > > > >    Would like to understand more details on this API usage.
> > > > > >
> > > > > Adding Sunil and Jiayu on CC who are looking at this area from the
> > > > > OVS and vhost sides.
> > > >
> > > > See above.
> > > >
> > > > Sunil. Jiayu, Could you share the details on the usage and why it is
> > needed.
> > > >
> > > >
> > > > >
> > > > > /Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-21 14:56                 ` Jerin Jacob
@ 2021-09-21 15:34                   ` Pai G, Sunil
  2021-09-21 16:58                     ` Jerin Jacob
  0 siblings, 1 reply; 130+ messages in thread
From: Pai G, Sunil @ 2021-09-21 15:34 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Hu, Jiayu, Richardson, Bruce, dpdk-dev, Walsh, Conor, Laatz,
	Kevin, fengchengwen, Jerin Jacob, Satananda Burla,
	Radha Mohan Chintakuntla

Hi Jerin,

<snipped>

> > > > The fact is  that it's very hard for apps to calculate the
> > > > available space of a
> > > DMA ring.
> > >
> > > Yes, I agree.
> > >
> > > My question is more why to calculate the space per burst and
> > > introduce yet another fastpath API.
> > > For example, the application needs to copy 8 segments to complete a
> > > logical copy in the application perspective.
> > > In case, when 8th copy is completed then only the application marks
> > > the logical copy completed.
> > > i.e why to check per burst, 8 segments are available or not? Even it
> > > is available, there may be multiple reasons why any of the segment
> > > copies can fail. So the application needs to track all the jobs completed or
> not anyway.
> > > Am I missing something in terms of vhost or OVS usage?
> > >
> >
> > For the packets that do not entirely fit in the DMA ring , we have a SW copy
> fallback in place.
> > So, we would like to avoid scenario caused because of DMA ring full where
> few parts of the packet are copied through DMA and other parts by CPU.
> > Besides, this API would also help improve debuggability/device
> introspection to check the occupancy rather than the app having to manually
> track the state of every DMA device in use.
> 
> To understand it better, Could you share more details on feedback
> mechanism on your application enqueue
> 
> app_enqueue_v1(.., nb_seg)
> {
>              /* Not enough space, Let application handle it by dropping or
> resubmitting */
>              if (rte_dmadev_burst_capacity() < nb_seg)
>                         return -ENOSPC;
> 
>             do rte_dma_op() in loop without checking error;
>             return 0; // Success
> }
> 
> vs
> app_enqueue_v2(.., nb_seg)
> {
>            int rc;
> 
>             rc |= rte_dma_op() in loop without checking error;
>             return rc; // return the actual status to application  if Not enough space,
> Let application handle it by dropping or resubmitting */ }
> 
> Is app_enqueue_v1() and app_enqueue_v2() logically the same from
> application PoV. Right?
> 
> If not, could you explain, the version you are planning to do for
> app_enqueue()

The exact version can be found here: http://patchwork.ozlabs.org/project/openvswitch/patch/20210907120021.4093604-2-sunil.pai.g@intel.com/
Unfortunately, the two versions are not the same in our case because of the SW fallback we have for ring-full scenarios.
For a packet with 8 nb_segs, if the ring only has space for 4, we would skip this packet with app_enqueue_v1,
while app_enqueue_v2 would go ahead with the enqueue, resulting in a mix of DMA and CPU copies for a packet, which we would want to avoid.

<snipped>

Thanks and regards,
Sunil


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-21 15:34                   ` Pai G, Sunil
@ 2021-09-21 16:58                     ` Jerin Jacob
  2021-09-21 17:12                       ` Pai G, Sunil
  0 siblings, 1 reply; 130+ messages in thread
From: Jerin Jacob @ 2021-09-21 16:58 UTC (permalink / raw)
  To: Pai G, Sunil
  Cc: Hu, Jiayu, Richardson, Bruce, dpdk-dev, Walsh, Conor, Laatz,
	Kevin, fengchengwen, Jerin Jacob, Satananda Burla,
	Radha Mohan Chintakuntla

On Tue, Sep 21, 2021 at 9:05 PM Pai G, Sunil <sunil.pai.g@intel.com> wrote:
>
> Hi Jerin,
>
> <snipped>
>
> > > > > The fact is  that it's very hard for apps to calculate the
> > > > > available space of a
> > > > DMA ring.
> > > >
> > > > Yes, I agree.
> > > >
> > > > My question is more why to calculate the space per burst and
> > > > introduce yet another fastpath API.
> > > > For example, the application needs to copy 8 segments to complete a
> > > > logical copy in the application perspective.
> > > > In case, when 8th copy is completed then only the application marks
> > > > the logical copy completed.
> > > > i.e why to check per burst, 8 segments are available or not? Even it
> > > > is available, there may be multiple reasons why any of the segment
> > > > copies can fail. So the application needs to track all the jobs completed or
> > not anyway.
> > > > Am I missing something in terms of vhost or OVS usage?
> > > >
> > >
> > > For the packets that do not entirely fit in the DMA ring , we have a SW copy
> > fallback in place.
> > > So, we would like to avoid scenario caused because of DMA ring full where
> > few parts of the packet are copied through DMA and other parts by CPU.
> > > Besides, this API would also help improve debuggability/device
> > introspection to check the occupancy rather than the app having to manually
> > track the state of every DMA device in use.
> >
> > To understand it better, Could you share more details on feedback
> > mechanism on your application enqueue
> >
> > app_enqueue_v1(.., nb_seg)
> > {
> >              /* Not enough space, Let application handle it by dropping or
> > resubmitting */
> >              if (rte_dmadev_burst_capacity() < nb_seg)
> >                         return -ENOSPC;
> >
> >             do rte_dma_op() in loop without checking error;
> >             return 0; // Success
> > }
> >
> > vs
> > app_enqueue_v2(.., nb_seg)
> > {
> >            int rc;
> >
> >             rc |= rte_dma_op() in loop without checking error;
> >             return rc; // return the actual status to application  if Not enough space,
> > Let application handle it by dropping or resubmitting */ }
> >
> > Is app_enqueue_v1() and app_enqueue_v2() logically the same from
> > application PoV. Right?
> >
> > If not, could you explain, the version you are planning to do for
> > app_enqueue()
>
> The exact version can be found here : http://patchwork.ozlabs.org/project/openvswitch/patch/20210907120021.4093604-2-sunil.pai.g@intel.com/
> Unfortunately, both versions are not same in our case because of the SW fallback we have for ring full scenario's.
> For a packet with 8 nb_segs, if the ring has only space for 4 , we would avoid this packet with app_enqueue_v1
> while going ahead with an enqueue with app_enqueue_v2, resulting in a mix of DMA and CPU copies for a packet which we would want to avoid.

Thanks for the RFC link. The usage is clear now. Since you are checking the
space in the completion handler, I am not sure what is hard about getting
the remaining space.
Will the following logic work to find the empty space? If not, please
discuss the issue with this approach. I am trying to avoid yet another
fastpath API
and complication in the driver, as there is already an element of checking space in the
submit queue and completion queue, at least in our hardware.

     max_count = nb_desc; /* power of 2 */
     mask = max_count - 1;

     for (i = 0; i < n; i++) {
          submit_idx = rte_dma_copy();
     }
     rc = rte_dma_completed(..., &completed_idx..);
     space_in_queue = mask - ((submit_idx - completed_idx) & mask);
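
Or, as a small self-contained helper (assuming nb_desc is a power of two and that
submit_idx/completed_idx are the uint16_t ring indices reported by the copy/completed calls):

#include <stdint.h>

/* Illustrative only: free slots left in the ring; the cast keeps the
 * subtraction correct across uint16_t wrap-around. */
static inline uint16_t
ring_space(uint16_t nb_desc, uint16_t submit_idx, uint16_t completed_idx)
{
	uint16_t mask = nb_desc - 1;

	return mask - ((uint16_t)(submit_idx - completed_idx) & mask);
}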

>
> <snipped>
>
> Thanks and regards,
> Sunil
>

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-21 16:58                     ` Jerin Jacob
@ 2021-09-21 17:12                       ` Pai G, Sunil
  2021-09-21 18:11                         ` Jerin Jacob
  0 siblings, 1 reply; 130+ messages in thread
From: Pai G, Sunil @ 2021-09-21 17:12 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Hu, Jiayu, Richardson, Bruce, dpdk-dev, Walsh, Conor, Laatz,
	Kevin, fengchengwen, Jerin Jacob, Satananda Burla,
	Radha Mohan Chintakuntla

Hi Jerin, 

<snipped>

> > > To understand it better, Could you share more details on feedback
> > > mechanism on your application enqueue
> > >
> > > app_enqueue_v1(.., nb_seg)
> > > {
> > >              /* Not enough space, Let application handle it by
> > > dropping or resubmitting */
> > >              if (rte_dmadev_burst_capacity() < nb_seg)
> > >                         return -ENOSPC;
> > >
> > >             do rte_dma_op() in loop without checking error;
> > >             return 0; // Success
> > > }
> > >
> > > vs
> > > app_enqueue_v2(.., nb_seg)
> > > {
> > >            int rc;
> > >
> > >             rc |= rte_dma_op() in loop without checking error;
> > >             return rc; // return the actual status to application
> > > if Not enough space, Let application handle it by dropping or
> > > resubmitting */ }
> > >
> > > Is app_enqueue_v1() and app_enqueue_v2() logically the same from
> > > application PoV. Right?
> > >
> > > If not, could you explain, the version you are planning to do for
> > > app_enqueue()
> >
> > The exact version can be found here :
> >
> http://patchwork.ozlabs.org/project/openvswitch/patch/20210907120021.4
> > 093604-2-sunil.pai.g@intel.com/ Unfortunately, both versions are not
> > same in our case because of the SW fallback we have for ring full scenario's.
> > For a packet with 8 nb_segs, if the ring has only space for 4 , we
> > would avoid this packet with app_enqueue_v1 while going ahead with an
> enqueue with app_enqueue_v2, resulting in a mix of DMA and CPU copies
> for a packet which we would want to avoid.
> 
> Thanks for RFC link. Usage is clear now, Since you are checking the space in
> the completion handler, I am not sure, what is hard to get the remaining
> space, Will following logic work to find the empty space. If not, please discuss
> the issue with this approach. I am trying to avoid yet another fastpath API
> and complication in driver as there is element checking space in the submit
> queue and completion queue at least in our hardware.
> 
>      max_count = nb_desc; (power of 2)
>      mask = max_count - 1;
> 
>      for (i = 0; I < n; i++) {
>           submit_idx = rte_dma_copy();
>      }
>      rc = rte_dma_completed(…, &completed_idx..);
>      space_in_queue =  mask - ((submit_idx – completed_idx) & mask);
> 

Unfortunately, space left in the device (as calculated by the app) can still mean there is no space in the device :|
i.e. it's PMD dependent.

As Jiayu mentioned before:
> The fact is that it's very hard for apps to calculate the available space of a DMA ring.
> For DSA, the available space is decided by three factors: the number of available slots
> in SW ring, the max batching size of a batch descriptor, and if there are available batch
> descriptors. The first one is configured by SW, and apps can calculate it. But the second
> depends on DSA HW, and the third one is hided in DSA driver which is not visible to apps.
> Considering the complexity of different DMA HW, I think the best way is to hide all details
> inside DMA dev and provide this check capacity API for apps.

<snipped>

Thanks and regards,
Sunil


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-21 17:12                       ` Pai G, Sunil
@ 2021-09-21 18:11                         ` Jerin Jacob
  2021-09-22  1:51                           ` fengchengwen
  0 siblings, 1 reply; 130+ messages in thread
From: Jerin Jacob @ 2021-09-21 18:11 UTC (permalink / raw)
  To: Pai G, Sunil
  Cc: Hu, Jiayu, Richardson, Bruce, dpdk-dev, Walsh, Conor, Laatz,
	Kevin, fengchengwen, Jerin Jacob, Satananda Burla,
	Radha Mohan Chintakuntla

On Tue, Sep 21, 2021 at 10:42 PM Pai G, Sunil <sunil.pai.g@intel.com> wrote:
>
> Hi Jerin,
>
> <snipped>
>
> > > > To understand it better, Could you share more details on feedback
> > > > mechanism on your application enqueue
> > > >
> > > > app_enqueue_v1(.., nb_seg)
> > > > {
> > > >              /* Not enough space, Let application handle it by
> > > > dropping or resubmitting */
> > > >              if (rte_dmadev_burst_capacity() < nb_seg)
> > > >                         return -ENOSPC;
> > > >
> > > >             do rte_dma_op() in loop without checking error;
> > > >             return 0; // Success
> > > > }
> > > >
> > > > vs
> > > > app_enqueue_v2(.., nb_seg)
> > > > {
> > > >            int rc;
> > > >
> > > >             rc |= rte_dma_op() in loop without checking error;
> > > >             return rc; // return the actual status to application
> > > > if Not enough space, Let application handle it by dropping or
> > > > resubmitting */ }
> > > >
> > > > Is app_enqueue_v1() and app_enqueue_v2() logically the same from
> > > > application PoV. Right?
> > > >
> > > > If not, could you explain, the version you are planning to do for
> > > > app_enqueue()
> > >
> > > The exact version can be found here :
> > >
> > http://patchwork.ozlabs.org/project/openvswitch/patch/20210907120021.4
> > > 093604-2-sunil.pai.g@intel.com/ Unfortunately, both versions are not
> > > same in our case because of the SW fallback we have for ring full scenario's.
> > > For a packet with 8 nb_segs, if the ring has only space for 4 , we
> > > would avoid this packet with app_enqueue_v1 while going ahead with an
> > enqueue with app_enqueue_v2, resulting in a mix of DMA and CPU copies
> > for a packet which we would want to avoid.
> >
> > Thanks for RFC link. Usage is clear now, Since you are checking the space in
> > the completion handler, I am not sure, what is hard to get the remaining
> > space, Will following logic work to find the empty space. If not, please discuss
> > the issue with this approach. I am trying to avoid yet another fastpath API
> > and complication in driver as there is element checking space in the submit
> > queue and completion queue at least in our hardware.
> >
> >      max_count = nb_desc; (power of 2)
> >      mask = max_count - 1;
> >
> >      for (i = 0; I < n; i++) {
> >           submit_idx = rte_dma_copy();
> >      }
> >      rc = rte_dma_completed(…, &completed_idx..);
> >      space_in_queue =  mask - ((submit_idx – completed_idx) & mask);
> >
>
> Unfortunately, space left in the device (as calculated by the app) still can mean there is no space in the device :|
> i.e its pmd dependent.

I did not pay much attention to Jiayu's reply as I did not know what DSA is.
Now that I have searched for DSA on the ML, I can see the PMD patches.

If it is a PMD limitation, then I am fine with the proposed API.

@Richardson, Bruce @Laatz, Kevin @feng Since this is used in the fast path, can
we move it to a fast-path handler and
remove the additional check from the common-code fast path, like the other APIs?

>
> As Jiayu mentioned before:
> > The fact is that it's very hard for apps to calculate the available space of a DMA ring.
> > For DSA, the available space is decided by three factors: the number of available slots
> > in SW ring, the max batching size of a batch descriptor, and if there are available batch
> > descriptors. The first one is configured by SW, and apps can calculate it. But the second
> > depends on DSA HW, and the third one is hided in DSA driver which is not visible to apps.
> > Considering the complexity of different DMA HW, I think the best way is to hide all details
> > inside DMA dev and provide this check capacity API for apps.
>
> <snipped>
>
> Thanks and regards,
> Sunil
>

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-21 18:11                         ` Jerin Jacob
@ 2021-09-22  1:51                           ` fengchengwen
  2021-09-22  7:56                             ` Bruce Richardson
  0 siblings, 1 reply; 130+ messages in thread
From: fengchengwen @ 2021-09-22  1:51 UTC (permalink / raw)
  To: Jerin Jacob, Pai G, Sunil
  Cc: Hu, Jiayu, Richardson, Bruce, dpdk-dev, Walsh, Conor, Laatz,
	Kevin, Jerin Jacob, Satananda Burla, Radha Mohan Chintakuntla

On 2021/9/22 2:11, Jerin Jacob wrote:
> On Tue, Sep 21, 2021 at 10:42 PM Pai G, Sunil <sunil.pai.g@intel.com> wrote:
>>
>> Hi Jerin,
>>
>> <snipped>
>>
>>>>> To understand it better, Could you share more details on feedback
>>>>> mechanism on your application enqueue
>>>>>
>>>>> app_enqueue_v1(.., nb_seg)
>>>>> {
>>>>>              /* Not enough space, Let application handle it by
>>>>> dropping or resubmitting */
>>>>>              if (rte_dmadev_burst_capacity() < nb_seg)
>>>>>                         return -ENOSPC;
>>>>>
>>>>>             do rte_dma_op() in loop without checking error;
>>>>>             return 0; // Success
>>>>> }
>>>>>
>>>>> vs
>>>>> app_enqueue_v2(.., nb_seg)
>>>>> {
>>>>>            int rc;
>>>>>
>>>>>             rc |= rte_dma_op() in loop without checking error;
>>>>>             return rc; // return the actual status to application
>>>>> if Not enough space, Let application handle it by dropping or
>>>>> resubmitting */ }
>>>>>
>>>>> Is app_enqueue_v1() and app_enqueue_v2() logically the same from
>>>>> application PoV. Right?
>>>>>
>>>>> If not, could you explain, the version you are planning to do for
>>>>> app_enqueue()
>>>>
>>>> The exact version can be found here :
>>>>
>>> http://patchwork.ozlabs.org/project/openvswitch/patch/20210907120021.4
>>>> 093604-2-sunil.pai.g@intel.com/ Unfortunately, both versions are not
>>>> same in our case because of the SW fallback we have for ring full scenario's.
>>>> For a packet with 8 nb_segs, if the ring has only space for 4 , we
>>>> would avoid this packet with app_enqueue_v1 while going ahead with an
>>> enqueue with app_enqueue_v2, resulting in a mix of DMA and CPU copies
>>> for a packet which we would want to avoid.
>>>
>>> Thanks for RFC link. Usage is clear now, Since you are checking the space in
>>> the completion handler, I am not sure, what is hard to get the remaining
>>> space, Will following logic work to find the empty space. If not, please discuss
>>> the issue with this approach. I am trying to avoid yet another fastpath API
>>> and complication in driver as there is element checking space in the submit
>>> queue and completion queue at least in our hardware.
>>>
>>>      max_count = nb_desc; (power of 2)
>>>      mask = max_count - 1;
>>>
>>>      for (i = 0; I < n; i++) {
>>>           submit_idx = rte_dma_copy();
>>>      }
>>>      rc = rte_dma_completed(…, &completed_idx..);
>>>      space_in_queue =  mask - ((submit_idx – completed_idx) & mask);
>>>
>>
>> Unfortunately, space left in the device (as calculated by the app) still can mean there is no space in the device :|
>> i.e its pmd dependent.
> 
> I did not pay much attention to Jiayu's reply as I did not know what is DSA.
> Now I searched the DSA in ml, I could see the PMD patches.
> 
> If it is PMD limitation then I am fine with the proposed API.
> 
> @Richardson, Bruce @Laatz, Kevin  @feng Since it is used fastpath, Can
> we move to fastpath handler and
> remove additional check in fastpath from common code like other APIs.

+1 for moving it to the fastpath.

> 
>>
>> As Jiayu mentioned before:
>>> The fact is that it's very hard for apps to calculate the available space of a DMA ring.
>>> For DSA, the available space is decided by three factors: the number of available slots
>>> in SW ring, the max batching size of a batch descriptor, and if there are available batch
>>> descriptors. The first one is configured by SW, and apps can calculate it. But the second
>>> depends on DSA HW, and the third one is hided in DSA driver which is not visible to apps.
>>> Considering the complexity of different DMA HW, I think the best way is to hide all details
>>> inside DMA dev and provide this check capacity API for apps.
>>
>> <snipped>
>>
>> Thanks and regards,
>> Sunil
>>
> .
> 

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-22  1:51                           ` fengchengwen
@ 2021-09-22  7:56                             ` Bruce Richardson
  2021-09-22 16:35                               ` Bruce Richardson
  0 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-22  7:56 UTC (permalink / raw)
  To: fengchengwen
  Cc: Jerin Jacob, Pai G, Sunil, Hu, Jiayu, dpdk-dev, Walsh, Conor,
	Laatz, Kevin, Jerin Jacob, Satananda Burla,
	Radha Mohan Chintakuntla

On Wed, Sep 22, 2021 at 09:51:42AM +0800, fengchengwen wrote:
> On 2021/9/22 2:11, Jerin Jacob wrote:
> > On Tue, Sep 21, 2021 at 10:42 PM Pai G, Sunil <sunil.pai.g@intel.com> wrote:
> >>
> >> Hi Jerin,
> >>
> >> <snipped>
> >>
> >>>>> To understand it better, Could you share more details on feedback
> >>>>> mechanism on your application enqueue
> >>>>>
> >>>>> app_enqueue_v1(.., nb_seg)
> >>>>> {
> >>>>>              /* Not enough space, Let application handle it by
> >>>>> dropping or resubmitting */
> >>>>>              if (rte_dmadev_burst_capacity() < nb_seg)
> >>>>>                         return -ENOSPC;
> >>>>>
> >>>>>             do rte_dma_op() in loop without checking error;
> >>>>>             return 0; // Success
> >>>>> }
> >>>>>
> >>>>> vs
> >>>>> app_enqueue_v2(.., nb_seg)
> >>>>> {
> >>>>>            int rc;
> >>>>>
> >>>>>             rc |= rte_dma_op() in loop without checking error;
> >>>>>             return rc; // return the actual status to application
> >>>>> if Not enough space, Let application handle it by dropping or
> >>>>> resubmitting */ }
> >>>>>
> >>>>> Is app_enqueue_v1() and app_enqueue_v2() logically the same from
> >>>>> application PoV. Right?
> >>>>>
> >>>>> If not, could you explain, the version you are planning to do for
> >>>>> app_enqueue()
> >>>>
> >>>> The exact version can be found here :
> >>>>
> >>> http://patchwork.ozlabs.org/project/openvswitch/patch/20210907120021.4
> >>>> 093604-2-sunil.pai.g@intel.com/ Unfortunately, both versions are not
> >>>> same in our case because of the SW fallback we have for ring full scenario's.
> >>>> For a packet with 8 nb_segs, if the ring has only space for 4 , we
> >>>> would avoid this packet with app_enqueue_v1 while going ahead with an
> >>> enqueue with app_enqueue_v2, resulting in a mix of DMA and CPU copies
> >>> for a packet which we would want to avoid.
> >>>
> >>> Thanks for RFC link. Usage is clear now, Since you are checking the space in
> >>> the completion handler, I am not sure, what is hard to get the remaining
> >>> space, Will following logic work to find the empty space. If not, please discuss
> >>> the issue with this approach. I am trying to avoid yet another fastpath API
> >>> and complication in driver as there is element checking space in the submit
> >>> queue and completion queue at least in our hardware.
> >>>
> >>>      max_count = nb_desc; (power of 2)
> >>>      mask = max_count - 1;
> >>>
> >>>      for (i = 0; I < n; i++) {
> >>>           submit_idx = rte_dma_copy();
> >>>      }
> >>>      rc = rte_dma_completed(…, &completed_idx..);
> >>>      space_in_queue =  mask - ((submit_idx – completed_idx) & mask);
> >>>
> >>
> >> Unfortunately, space left in the device (as calculated by the app) still can mean there is no space in the device :|
> >> i.e its pmd dependent.
> > 
> > I did not pay much attention to Jiayu's reply as I did not know what is DSA.
> > Now I searched the DSA in ml, I could see the PMD patches.
> > 
> > If it is PMD limitation then I am fine with the proposed API.
> > 
> > @Richardson, Bruce @Laatz, Kevin  @feng Since it is used fastpath, Can
> > we move to fastpath handler and
> > remove additional check in fastpath from common code like other APIs.
> 
> +1 for move to fastpath.
> 

Will move in next revision.

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v5 1/9] dmadev: add channel status check for testing use
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 1/9] dmadev: add channel status check for testing use Bruce Richardson
@ 2021-09-22  8:25     ` fengchengwen
  2021-09-22  8:31       ` Bruce Richardson
  0 siblings, 1 reply; 130+ messages in thread
From: fengchengwen @ 2021-09-22  8:25 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, kevin.laatz, jerinj

On 2021/9/17 21:54, Bruce Richardson wrote:
> Add in a function to check if a device or vchan has completed all jobs
> assigned to it, without gathering in the results. This is primarily for
> use in testing, to allow the hardware to be in a known-state prior to
> gathering completions.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> Reviewed-by: Conor Walsh <conor.walsh@intel.com>
> Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
> ---
>  lib/dmadev/rte_dmadev.c      | 15 +++++++++++++++
>  lib/dmadev/rte_dmadev.h      | 33 +++++++++++++++++++++++++++++++++
>  lib/dmadev/rte_dmadev_core.h |  6 ++++++
>  lib/dmadev/version.map       |  1 +
>  4 files changed, 55 insertions(+)
> 
> diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
> index 544937acf8..859958fff8 100644
> --- a/lib/dmadev/rte_dmadev.c
> +++ b/lib/dmadev/rte_dmadev.c
> @@ -716,3 +716,18 @@ rte_dma_dump(int16_t dev_id, FILE *f)
> 
>  	return 0;
>  }
> +
> +int
> +rte_dma_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status)

uint16_t dev_id -> int16_t

> +{
> +	struct rte_dma_dev *dev = &rte_dma_devices[dev_id];
> +
> +	RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
> +	if (vchan >= dev->data->dev_conf.nb_vchans) {
> +		RTE_DMA_LOG(ERR, "Device %u vchan %u out of range\n", dev_id, vchan);
> +		return -EINVAL;
> +	}
> +
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vchan_status, -ENOTSUP);
> +	return (*dev->dev_ops->vchan_status)(dev, vchan, status);
> +}
> diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
> index be54f2cb9d..86c4a38f83 100644
> --- a/lib/dmadev/rte_dmadev.h
> +++ b/lib/dmadev/rte_dmadev.h
> @@ -660,6 +660,39 @@ int rte_dma_stats_get(int16_t dev_id, uint16_t vchan,
>  __rte_experimental
>  int rte_dma_stats_reset(int16_t dev_id, uint16_t vchan);
> 
> +/**
> + * device vchannel status
> + *
> + * Enum with the options for the channel status, either idle, active or halted due to error

please add @see

> + */
> +enum rte_dma_vchan_status {
> +	RTE_DMA_VCHAN_IDLE,          /**< not processing, awaiting ops */
> +	RTE_DMA_VCHAN_ACTIVE,        /**< currently processing jobs */
> +	RTE_DMA_VCHAN_HALTED_ERROR,  /**< not processing due to error, cannot accept new ops */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Determine if all jobs have completed on a device channel.
> + * This function is primarily designed for testing use, as it allows a process to check if
> + * all jobs are completed, without actually gathering completions from those jobs.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + * @param[out] status
> + *   The vchan status
> + * @return
> + *   0 - call completed successfully
> + *   < 0 - error code indicating there was a problem calling the API
> + */
> +__rte_experimental
> +int
> +rte_dma_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status);
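
As a side note, here is a minimal sketch of how a test could poll this once applied
(dev_id/vchan assumed, int16_t dev_id per the comment above, and a bounded retry count;
illustrative only):

#include <rte_cycles.h>
#include <rte_dmadev.h>

/* Illustrative only: poll until the vchan stops being active, or give up. */
static int
wait_for_idle(int16_t dev_id, uint16_t vchan)
{
	enum rte_dma_vchan_status st;
	int i, ret;

	for (i = 0; i < 1000; i++) {
		ret = rte_dma_vchan_status(dev_id, vchan, &st);
		if (ret < 0)
			return ret;
		if (st != RTE_DMA_VCHAN_ACTIVE)
			return 0;
		rte_delay_us(10);
	}
	return -1; /* still busy after the bounded wait */
}
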
> +
>  /**
>   * @warning
>   * @b EXPERIMENTAL: this API may change without prior notice.
> diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
> index edb3286cbb..0eec1aa43b 100644
> --- a/lib/dmadev/rte_dmadev_core.h
> +++ b/lib/dmadev/rte_dmadev_core.h
> @@ -46,6 +46,10 @@ typedef int (*rte_dma_vchan_setup_t)(struct rte_dma_dev *dev, uint16_t vchan,
>  				const struct rte_dma_vchan_conf *conf,
>  				uint32_t conf_sz);
> 
> +/** @internal Used to check if a virtual channel has finished all jobs. */
> +typedef int (*rte_dma_vchan_status_t)(const struct rte_dma_dev *dev, uint16_t vchan,
> +		enum rte_dma_vchan_status *status);
> +
>  /** @internal Used to retrieve basic statistics. */
>  typedef int (*rte_dma_stats_get_t)(const struct rte_dma_dev *dev,
>  			uint16_t vchan, struct rte_dma_stats *stats,
> @@ -119,6 +123,8 @@ struct rte_dma_dev_ops {
>  	rte_dma_stats_reset_t    stats_reset;
> 
>  	rte_dma_dump_t           dev_dump;
> +	rte_dma_vchan_status_t   vchan_status;
> +
>  };
> 
>  /**
> diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
> index c780463bb2..40ea517016 100644
> --- a/lib/dmadev/version.map
> +++ b/lib/dmadev/version.map
> @@ -20,6 +20,7 @@ EXPERIMENTAL {
>  	rte_dma_stop;
>  	rte_dma_submit;
>  	rte_dma_vchan_setup;
> +	rte_dma_vchan_status;
> 
>  	local: *;
>  };
> --
> 2.30.2
> 
> .
> 

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v5 1/9] dmadev: add channel status check for testing use
  2021-09-22  8:25     ` fengchengwen
@ 2021-09-22  8:31       ` Bruce Richardson
  0 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-22  8:31 UTC (permalink / raw)
  To: fengchengwen; +Cc: dev, conor.walsh, kevin.laatz, jerinj

On Wed, Sep 22, 2021 at 04:25:15PM +0800, fengchengwen wrote:
> On 2021/9/17 21:54, Bruce Richardson wrote:
> > Add in a function to check if a device or vchan has completed all jobs
> > assigned to it, without gathering in the results. This is primarily for
> > use in testing, to allow the hardware to be in a known-state prior to
> > gathering completions.
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > Reviewed-by: Conor Walsh <conor.walsh@intel.com>
> > Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
> > ---
> >  lib/dmadev/rte_dmadev.c      | 15 +++++++++++++++
> >  lib/dmadev/rte_dmadev.h      | 33 +++++++++++++++++++++++++++++++++
> >  lib/dmadev/rte_dmadev_core.h |  6 ++++++
> >  lib/dmadev/version.map       |  1 +
> >  4 files changed, 55 insertions(+)
> > 
> > diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
> > index 544937acf8..859958fff8 100644
> > --- a/lib/dmadev/rte_dmadev.c
> > +++ b/lib/dmadev/rte_dmadev.c
> > @@ -716,3 +716,18 @@ rte_dma_dump(int16_t dev_id, FILE *f)
> > 
> >  	return 0;
> >  }
> > +
> > +int
> > +rte_dma_vchan_status(uint16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status)
> 
> uint16_t dev_id -> int16_t
> 
> > +{
> > +	struct rte_dma_dev *dev = &rte_dma_devices[dev_id];
> > +
> > +	RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
> > +	if (vchan >= dev->data->dev_conf.nb_vchans) {
> > +		RTE_DMA_LOG(ERR, "Device %u vchan %u out of range\n", dev_id, vchan);
> > +		return -EINVAL;
> > +	}
> > +
> > +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vchan_status, -ENOTSUP);
> > +	return (*dev->dev_ops->vchan_status)(dev, vchan, status);
> > +}
> > diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
> > index be54f2cb9d..86c4a38f83 100644
> > --- a/lib/dmadev/rte_dmadev.h
> > +++ b/lib/dmadev/rte_dmadev.h
> > @@ -660,6 +660,39 @@ int rte_dma_stats_get(int16_t dev_id, uint16_t vchan,
> >  __rte_experimental
> >  int rte_dma_stats_reset(int16_t dev_id, uint16_t vchan);
> > 
> > +/**
> > + * device vchannel status
> > + *
> > + * Enum with the options for the channel status, either idle, active or halted due to error
> 
> please add @see
> 
Will do in next revision.

Thanks for the feedback.
/Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v5 3/9] dmadev: add device iterator
  2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 3/9] dmadev: add device iterator Bruce Richardson
@ 2021-09-22  8:46     ` fengchengwen
  0 siblings, 0 replies; 130+ messages in thread
From: fengchengwen @ 2021-09-22  8:46 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, kevin.laatz, jerinj

On 2021/9/17 21:54, Bruce Richardson wrote:
> Add a function and wrapper macro to iterate over all dma devices.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  lib/dmadev/rte_dmadev.c | 13 +++++++++++++
>  lib/dmadev/rte_dmadev.h | 18 ++++++++++++++++++
>  lib/dmadev/version.map  |  1 +
>  3 files changed, 32 insertions(+)
> 
> diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
> index ed342e0d32..ba189f3539 100644
> --- a/lib/dmadev/rte_dmadev.c
> +++ b/lib/dmadev/rte_dmadev.c
> @@ -55,6 +55,19 @@ rte_dma_dev_max(size_t dev_max)
>  	return 0;
>  }
> 
> +uint16_t
> +rte_dma_next_dev(uint16_t start_dev_id)
> +{
> +	uint16_t dev_id = start_dev_id;
> +	while (dev_id < dma_devices_max && rte_dma_devices[dev_id].state == RTE_DMA_DEV_UNUSED)
> +		dev_id++;
> +
> +	if (dev_id < dma_devices_max)
> +		return dev_id;
> +
> +	return UINT16_MAX;
> +}

Now that dev_id is int16_t, the input parameter should also be int16_t, and the function should return -1 when there is no next device, e.g.

int16_t
rte_dma_next_dev(int16_t start_dev_id)
{
	int16_t dev_id = start_dev_id;
	while (dev_id < dma_devices_max && rte_dma_devices[dev_id].state == RTE_DMA_DEV_UNUSED)
		dev_id++;

	if (dev_id < dma_devices_max)
		return dev_id;

	return -1;
}

> +
>  static int
>  dma_check_name(const char *name)
>  {
> diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
> index be4bb18ee6..d262b8ed8d 100644
> --- a/lib/dmadev/rte_dmadev.h
> +++ b/lib/dmadev/rte_dmadev.h
> @@ -219,6 +219,24 @@ bool rte_dma_is_valid(int16_t dev_id);
>  __rte_experimental
>  uint16_t rte_dma_count_avail(void);
> 
> +/**
> + * Iterates over valid dmadev instances.
> + *
> + * @param start_dev_id
> + *   The id of the next possible dmadev.
> + * @return
> + *   Next valid dmadev, UINT16_MAX if there is none.
> + */
> +__rte_experimental
> +uint16_t rte_dma_next_dev(uint16_t start_dev_id);
> +
> +/** Utility macro to iterate over all available dmadevs */
> +#define RTE_DMA_FOREACH_DEV(p) \
> +	for (p = rte_dma_next_dev(0); \
> +	     (uint16_t)p < UINT16_MAX; \
> +	     p = rte_dma_next_dev(p + 1))

#define RTE_DMA_FOREACH_DEV(p) \
	for (p = rte_dma_next_dev(0); \
	     p != -1; \
	     p = rte_dma_next_dev(p + 1))

> +
> +
>  /** DMA device support memory-to-memory transfer.
>   *
>   * @see struct rte_dma_info::dev_capa
> diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
> index 66420c4ede..0ab570a1be 100644
> --- a/lib/dmadev/version.map
> +++ b/lib/dmadev/version.map
> @@ -15,6 +15,7 @@ EXPERIMENTAL {
>  	rte_dma_get_dev_id;
>  	rte_dma_info_get;
>  	rte_dma_is_valid;
> +	rte_dma_next_dev;
>  	rte_dma_start;
>  	rte_dma_stats_get;
>  	rte_dma_stats_reset;
> --
> 2.30.2
> 
> .
> 

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-22  7:56                             ` Bruce Richardson
@ 2021-09-22 16:35                               ` Bruce Richardson
  2021-09-22 17:29                                 ` Jerin Jacob
  0 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-22 16:35 UTC (permalink / raw)
  To: fengchengwen
  Cc: Jerin Jacob, Pai G, Sunil, Hu, Jiayu, dpdk-dev, Walsh, Conor,
	Laatz, Kevin, Jerin Jacob, Satananda Burla,
	Radha Mohan Chintakuntla

On Wed, Sep 22, 2021 at 08:56:50AM +0100, Bruce Richardson wrote:
> On Wed, Sep 22, 2021 at 09:51:42AM +0800, fengchengwen wrote:
> > On 2021/9/22 2:11, Jerin Jacob wrote:
> > > On Tue, Sep 21, 2021 at 10:42 PM Pai G, Sunil <sunil.pai.g@intel.com> wrote:
> > >>
> > >> Hi Jerin,
> > >>
> > >> <snipped>
> > >>
> > >>>>> To understand it better, Could you share more details on feedback
> > >>>>> mechanism on your application enqueue
> > >>>>>
> > >>>>> app_enqueue_v1(.., nb_seg)
> > >>>>> {
> > >>>>>              /* Not enough space, Let application handle it by
> > >>>>> dropping or resubmitting */
> > >>>>>              if (rte_dmadev_burst_capacity() < nb_seg)
> > >>>>>                         return -ENOSPC;
> > >>>>>
> > >>>>>             do rte_dma_op() in loop without checking error;
> > >>>>>             return 0; // Success
> > >>>>> }
> > >>>>>
> > >>>>> vs
> > >>>>> app_enqueue_v2(.., nb_seg)
> > >>>>> {
> > >>>>>            int rc;
> > >>>>>
> > >>>>>             rc |= rte_dma_op() in loop without checking error;
> > >>>>>             return rc; // return the actual status to application
> > >>>>> if Not enough space, Let application handle it by dropping or
> > >>>>> resubmitting */ }
> > >>>>>
> > >>>>> Is app_enqueue_v1() and app_enqueue_v2() logically the same from
> > >>>>> application PoV. Right?
> > >>>>>
> > >>>>> If not, could you explain, the version you are planning to do for
> > >>>>> app_enqueue()
> > >>>>
> > >>>> The exact version can be found here :
> > >>>>
> > >>> http://patchwork.ozlabs.org/project/openvswitch/patch/20210907120021.4
> > >>>> 093604-2-sunil.pai.g@intel.com/ Unfortunately, both versions are not
> > >>>> same in our case because of the SW fallback we have for ring full scenario's.
> > >>>> For a packet with 8 nb_segs, if the ring has only space for 4 , we
> > >>>> would avoid this packet with app_enqueue_v1 while going ahead with an
> > >>> enqueue with app_enqueue_v2, resulting in a mix of DMA and CPU copies
> > >>> for a packet which we would want to avoid.
> > >>>
> > >>> Thanks for RFC link. Usage is clear now, Since you are checking the space in
> > >>> the completion handler, I am not sure, what is hard to get the remaining
> > >>> space, Will following logic work to find the empty space. If not, please discuss
> > >>> the issue with this approach. I am trying to avoid yet another fastpath API
> > >>> and complication in driver as there is element checking space in the submit
> > >>> queue and completion queue at least in our hardware.
> > >>>
> > >>>      max_count = nb_desc; (power of 2)
> > >>>      mask = max_count - 1;
> > >>>
> > >>>      for (i = 0; I < n; i++) {
> > >>>           submit_idx = rte_dma_copy();
> > >>>      }
> > >>>      rc = rte_dma_completed(…, &completed_idx..);
> > >>>      space_in_queue =  mask - ((submit_idx – completed_idx) & mask);
> > >>>
> > >>
> > >> Unfortunately, space left in the device (as calculated by the app) still can mean there is no space in the device :|
> > >> i.e its pmd dependent.
> > > 
> > > I did not pay much attention to Jiayu's reply as I did not know what is DSA.
> > > Now I searched the DSA in ml, I could see the PMD patches.
> > > 
> > > If it is PMD limitation then I am fine with the proposed API.
> > > 
> > > @Richardson, Bruce @Laatz, Kevin  @feng Since it is used fastpath, Can
> > > we move to fastpath handler and
> > > remove additional check in fastpath from common code like other APIs.
> > 
> > +1 for move to fastpath.
> > 
> 
> Will move in next revision.

Follow-up question on this. If it's a fastpath function then we would not
normally check for support from drivers. Therefore do we want to:
1. make it a mandatory function
2. add a feature capability flag

Given that it's likely fairly easy for all drivers to implement, and it
makes it easier for apps to avoid having to check a feature flag for, I'd
tend towards option #1, but just would like consensus before I push any
more patches.
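
For illustration, a rough sketch of what the two options look like from the
application side; RTE_DMA_CAPA_BURST_CAPACITY below is a hypothetical flag
name used only to show option #2, it is not part of any proposed patch:

#include <stdint.h>
#include <rte_dmadev.h>

/* hypothetical capability bit, for illustration of option #2 only */
#define RTE_DMA_CAPA_BURST_CAPACITY RTE_BIT64(7)

/* Option #1: mandatory fast-path function, the app just calls it. */
static uint16_t
app_get_space_v1(int16_t dev_id, uint16_t vchan)
{
	return rte_dma_burst_capacity(dev_id, vchan);
}

/* Option #2: optional support, so the app must check a capability flag first. */
static uint16_t
app_get_space_v2(int16_t dev_id, uint16_t vchan)
{
	struct rte_dma_info info;

	rte_dma_info_get(dev_id, &info);
	if (info.dev_capa & RTE_DMA_CAPA_BURST_CAPACITY)
		return rte_dma_burst_capacity(dev_id, vchan);
	return UINT16_MAX; /* no information available, assume space */
}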

/Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-22 16:35                               ` Bruce Richardson
@ 2021-09-22 17:29                                 ` Jerin Jacob
  2021-09-23 13:24                                   ` fengchengwen
  0 siblings, 1 reply; 130+ messages in thread
From: Jerin Jacob @ 2021-09-22 17:29 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: fengchengwen, Pai G, Sunil, Hu, Jiayu, dpdk-dev, Walsh, Conor,
	Laatz, Kevin, Jerin Jacob, Satananda Burla,
	Radha Mohan Chintakuntla

On Wed, Sep 22, 2021 at 10:06 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Wed, Sep 22, 2021 at 08:56:50AM +0100, Bruce Richardson wrote:
> > On Wed, Sep 22, 2021 at 09:51:42AM +0800, fengchengwen wrote:
> > > On 2021/9/22 2:11, Jerin Jacob wrote:
> > > > On Tue, Sep 21, 2021 at 10:42 PM Pai G, Sunil <sunil.pai.g@intel.com> wrote:
> > > >>
> > > >> Hi Jerin,
> > > >>
> > > >> <snipped>
> > > >>
> > > >>>>> To understand it better, Could you share more details on feedback
> > > >>>>> mechanism on your application enqueue
> > > >>>>>
> > > >>>>> app_enqueue_v1(.., nb_seg)
> > > >>>>> {
> > > >>>>>              /* Not enough space, Let application handle it by
> > > >>>>> dropping or resubmitting */
> > > >>>>>              if (rte_dmadev_burst_capacity() < nb_seg)
> > > >>>>>                         return -ENOSPC;
> > > >>>>>
> > > >>>>>             do rte_dma_op() in loop without checking error;
> > > >>>>>             return 0; // Success
> > > >>>>> }
> > > >>>>>
> > > >>>>> vs
> > > >>>>> app_enqueue_v2(.., nb_seg)
> > > >>>>> {
> > > >>>>>            int rc;
> > > >>>>>
> > > >>>>>             rc |= rte_dma_op() in loop without checking error;
> > > >>>>>             return rc; // return the actual status to application
> > > >>>>> if Not enough space, Let application handle it by dropping or
> > > >>>>> resubmitting */ }
> > > >>>>>
> > > >>>>> Is app_enqueue_v1() and app_enqueue_v2() logically the same from
> > > >>>>> application PoV. Right?
> > > >>>>>
> > > >>>>> If not, could you explain, the version you are planning to do for
> > > >>>>> app_enqueue()
> > > >>>>
> > > >>>> The exact version can be found here :
> > > >>>>
> > > >>> http://patchwork.ozlabs.org/project/openvswitch/patch/20210907120021.4
> > > >>>> 093604-2-sunil.pai.g@intel.com/ Unfortunately, both versions are not
> > > >>>> same in our case because of the SW fallback we have for ring full scenario's.
> > > >>>> For a packet with 8 nb_segs, if the ring has only space for 4 , we
> > > >>>> would avoid this packet with app_enqueue_v1 while going ahead with an
> > > >>> enqueue with app_enqueue_v2, resulting in a mix of DMA and CPU copies
> > > >>> for a packet which we would want to avoid.
> > > >>>
> > > >>> Thanks for RFC link. Usage is clear now, Since you are checking the space in
> > > >>> the completion handler, I am not sure, what is hard to get the remaining
> > > >>> space, Will following logic work to find the empty space. If not, please discuss
> > > >>> the issue with this approach. I am trying to avoid yet another fastpath API
> > > >>> and complication in driver as there is element checking space in the submit
> > > >>> queue and completion queue at least in our hardware.
> > > >>>
> > > >>>      max_count = nb_desc; (power of 2)
> > > >>>      mask = max_count - 1;
> > > >>>
> > > >>>      for (i = 0; I < n; i++) {
> > > >>>           submit_idx = rte_dma_copy();
> > > >>>      }
> > > >>>      rc = rte_dma_completed(…, &completed_idx..);
> > > >>>      space_in_queue =  mask - ((submit_idx – completed_idx) & mask);
> > > >>>
> > > >>
> > > >> Unfortunately, space left in the device (as calculated by the app) still can mean there is no space in the device :|
> > > >> i.e its pmd dependent.
> > > >
> > > > I did not pay much attention to Jiayu's reply as I did not know what is DSA.
> > > > Now I searched the DSA in ml, I could see the PMD patches.
> > > >
> > > > If it is PMD limitation then I am fine with the proposed API.
> > > >
> > > > @Richardson, Bruce @Laatz, Kevin  @feng Since it is used fastpath, Can
> > > > we move to fastpath handler and
> > > > remove additional check in fastpath from common code like other APIs.
> > >
> > > +1 for move to fastpath.
> > >
> >
> > Will move in next revision.
>
> Follow-up question on this. If it's a fastpath function then we would not
> normally check for support from drivers. Therefore do we want to:
> 1. make it a mandatory function
> 2. add a feature capability flag
>
> Given that it's likely fairly easy for all drivers to implement, and it
> makes it easier for apps to avoid having to check a feature flag for, I'd
> tend towards option #1, but just would like consensus before I push any
> more patches.

I think, if vhost is using it that way, then it needs to be a
mandatory function.

>
> /Bruce

^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
  2021-09-22 17:29                                 ` Jerin Jacob
@ 2021-09-23 13:24                                   ` fengchengwen
  0 siblings, 0 replies; 130+ messages in thread
From: fengchengwen @ 2021-09-23 13:24 UTC (permalink / raw)
  To: Jerin Jacob, Bruce Richardson
  Cc: Pai G, Sunil, Hu, Jiayu, dpdk-dev, Walsh, Conor, Laatz, Kevin,
	Jerin Jacob, Satananda Burla, Radha Mohan Chintakuntla

On 2021/9/23 1:29, Jerin Jacob wrote:
> On Wed, Sep 22, 2021 at 10:06 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
>>
>> On Wed, Sep 22, 2021 at 08:56:50AM +0100, Bruce Richardson wrote:
>>> On Wed, Sep 22, 2021 at 09:51:42AM +0800, fengchengwen wrote:
>>>> On 2021/9/22 2:11, Jerin Jacob wrote:
>>>>> On Tue, Sep 21, 2021 at 10:42 PM Pai G, Sunil <sunil.pai.g@intel.com> wrote:
>>>>>>
>>>>>> Hi Jerin,
>>>>>>
>>>>>> <snipped>
>>>>>>
>>>>>>>>> To understand it better, Could you share more details on feedback
>>>>>>>>> mechanism on your application enqueue
>>>>>>>>>
>>>>>>>>> app_enqueue_v1(.., nb_seg)
>>>>>>>>> {
>>>>>>>>>              /* Not enough space, Let application handle it by
>>>>>>>>> dropping or resubmitting */
>>>>>>>>>              if (rte_dmadev_burst_capacity() < nb_seg)
>>>>>>>>>                         return -ENOSPC;
>>>>>>>>>
>>>>>>>>>             do rte_dma_op() in loop without checking error;
>>>>>>>>>             return 0; // Success
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> vs
>>>>>>>>> app_enqueue_v2(.., nb_seg)
>>>>>>>>> {
>>>>>>>>>            int rc;
>>>>>>>>>
>>>>>>>>>             rc |= rte_dma_op() in loop without checking error;
>>>>>>>>>             return rc; // return the actual status to application
>>>>>>>>> if Not enough space, Let application handle it by dropping or
>>>>>>>>> resubmitting */ }
>>>>>>>>>
>>>>>>>>> Is app_enqueue_v1() and app_enqueue_v2() logically the same from
>>>>>>>>> application PoV. Right?
>>>>>>>>>
>>>>>>>>> If not, could you explain, the version you are planning to do for
>>>>>>>>> app_enqueue()
>>>>>>>>
>>>>>>>> The exact version can be found here :
>>>>>>>>
>>>>>>> http://patchwork.ozlabs.org/project/openvswitch/patch/20210907120021.4
>>>>>>>> 093604-2-sunil.pai.g@intel.com/ Unfortunately, both versions are not
>>>>>>>> same in our case because of the SW fallback we have for ring full scenario's.
>>>>>>>> For a packet with 8 nb_segs, if the ring has only space for 4 , we
>>>>>>>> would avoid this packet with app_enqueue_v1 while going ahead with an
>>>>>>> enqueue with app_enqueue_v2, resulting in a mix of DMA and CPU copies
>>>>>>> for a packet which we would want to avoid.
>>>>>>>
>>>>>>> Thanks for RFC link. Usage is clear now, Since you are checking the space in
>>>>>>> the completion handler, I am not sure, what is hard to get the remaining
>>>>>>> space, Will following logic work to find the empty space. If not, please discuss
>>>>>>> the issue with this approach. I am trying to avoid yet another fastpath API
>>>>>>> and complication in driver as there is element checking space in the submit
>>>>>>> queue and completion queue at least in our hardware.
>>>>>>>
>>>>>>>      max_count = nb_desc; (power of 2)
>>>>>>>      mask = max_count - 1;
>>>>>>>
>>>>>>>      for (i = 0; I < n; i++) {
>>>>>>>           submit_idx = rte_dma_copy();
>>>>>>>      }
>>>>>>>      rc = rte_dma_completed(…, &completed_idx..);
>>>>>>>      space_in_queue =  mask - ((submit_idx – completed_idx) & mask);
>>>>>>>
>>>>>>
>>>>>> Unfortunately, space left in the device (as calculated by the app) still can mean there is no space in the device :|
>>>>>> i.e its pmd dependent.
>>>>>
>>>>> I did not pay much attention to Jiayu's reply as I did not know what is DSA.
>>>>> Now I searched the DSA in ml, I could see the PMD patches.
>>>>>
>>>>> If it is PMD limitation then I am fine with the proposed API.
>>>>>
>>>>> @Richardson, Bruce @Laatz, Kevin  @feng Since it is used fastpath, Can
>>>>> we move to fastpath handler and
>>>>> remove additional check in fastpath from common code like other APIs.
>>>>
>>>> +1 for move to fastpath.
>>>>
>>>
>>> Will move in next revision.
>>
>> Follow-up question on this. If it's a fastpath function then we would not
>> normally check for support from drivers. Therefore do we want to:
>> 1. make it a mandatory function
>> 2. add a feature capability flag
>>
>> Given that it's likely fairly easy for all drivers to implement, and it
>> makes it easier for apps to avoid having to check a feature flag for, I'd
>> tend towards option #1, but just would like consensus before I push any
>> more patches.
> 
> I think, if vhost is using it that way, then it needs to be a
> mandatory function.

+1

> 
>>
>> /Bruce
> .
> 

^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers
  2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
                   ` (10 preceding siblings ...)
  2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
@ 2021-09-24 10:29 ` Bruce Richardson
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 01/13] dmadev: add channel status check for testing use Bruce Richardson
                     ` (13 more replies)
  11 siblings, 14 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:29 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

This patchset adds a fairly comprehensive set of tests for basic dmadev
functionality. Tests are added to verify basic copy operation in each
device, using both submit function and submit flag, and verifying
completion gathering using both "completed()" and "completed_status()"
functions. Beyond that, tests are then added for the error reporting and
handling, as is a suite of tests for the fill() operation for devices that
support those.

New in version 6 of the series are a couple of new patches to add support
for the skeleton dmadev to pass the suite of tests. This includes adding
missing function support to it, as well as adding a capability flag to
dmadev so that a driver can indicate whether it safely handles invalid
addresses.

Depends-on: series-18960 ("support dmadev")

V6:
* changed type of dev_id from uint16_t to int16_t
* moved burst_capacity function to datapath function set
* added burst_capacity function and vchan_status functions to skeleton driver
* added capability flag to indicate if device supports handling errors
* enabled running unit tests on skeleton driver

V5:
* added missing reviewed-by tags from v3 reviewed.

V4:
* rebased to v22 of dmadev set
* added patch for iteration macro for dmadevs to allow testing each dmadev in
  turn

V3:
* add patch and tests for a burst-capacity function
* addressed review feedback from v2
* code cleanups to try and shorten code where possible

V2:
* added into dmadev an API to check for a device being idle
* removed the hard-coded timeout delays before checking completions, and instead
  wait for device to be idle
* added in checks for statistics updates as part of some tests
* fixed issue identified by internal coverity scan
* other minor miscellaneous changes and fixes.

Bruce Richardson (10):
  dmadev: add channel status check for testing use
  dma/skeleton: add channel status function
  dma/skeleton: add burst capacity function
  dmadev: add device iterator
  app/test: add basic dmadev instance tests
  app/test: add basic dmadev copy tests
  app/test: run test suite on skeleton driver
  app/test: add more comprehensive dmadev copy tests
  dmadev: add flag for error handling support
  app/test: test dmadev instance failure handling

Kevin Laatz (3):
  dmadev: add burst capacity API
  app/test: add dmadev fill tests
  app/test: add dmadev burst capacity API test

 app/test/test_dmadev.c                 | 829 ++++++++++++++++++++++++-
 drivers/dma/skeleton/skeleton_dmadev.c |  27 +
 drivers/dma/skeleton/skeleton_dmadev.h |   2 +-
 lib/dmadev/rte_dmadev.c                |  29 +
 lib/dmadev/rte_dmadev.h                |  92 +++
 lib/dmadev/rte_dmadev_core.h           |  10 +-
 lib/dmadev/version.map                 |   3 +
 7 files changed, 986 insertions(+), 6 deletions(-)

--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 01/13] dmadev: add channel status check for testing use
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
@ 2021-09-24 10:29   ` Bruce Richardson
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 02/13] dma/skeleton: add channel status function Bruce Richardson
                     ` (12 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:29 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a function to check whether a device or vchan has completed all jobs
assigned to it, without gathering the results. This is primarily for
use in testing, to allow the hardware to be brought to a known state
prior to gathering completions.
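
A minimal sketch of the intended test-side usage, assuming a one-second
timeout (the test patches later in this series add a similar await_hw()
helper):

#include <rte_dmadev.h>
#include <rte_cycles.h>
#include <rte_pause.h>

/* Poll until the vchan reports idle, giving up after roughly one second. */
static void
wait_for_idle(int16_t dev_id, uint16_t vchan)
{
	enum rte_dma_vchan_status st;
	const uint64_t end_cycles = rte_get_timer_cycles() + rte_get_timer_hz();

	if (rte_dma_vchan_status(dev_id, vchan, &st) < 0)
		return; /* op not supported by driver, caller must sleep instead */

	while (st == RTE_DMA_VCHAN_ACTIVE && rte_get_timer_cycles() < end_cycles) {
		rte_pause();
		rte_dma_vchan_status(dev_id, vchan, &st);
	}
}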

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 lib/dmadev/rte_dmadev.c      | 15 +++++++++++++++
 lib/dmadev/rte_dmadev.h      | 34 ++++++++++++++++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h |  6 ++++++
 lib/dmadev/version.map       |  1 +
 4 files changed, 56 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 544937acf8..e61116260b 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -716,3 +716,18 @@ rte_dma_dump(int16_t dev_id, FILE *f)
 
 	return 0;
 }
+
+int
+rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status)
+{
+	struct rte_dma_dev *dev = &rte_dma_devices[dev_id];
+
+	RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL);
+	if (vchan >= dev->data->dev_conf.nb_vchans) {
+		RTE_DMA_LOG(ERR, "Device %u vchan %u out of range\n", dev_id, vchan);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vchan_status, -ENOTSUP);
+	return (*dev->dev_ops->vchan_status)(dev, vchan, status);
+}
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index be54f2cb9d..f4c6546d87 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -660,6 +660,40 @@ int rte_dma_stats_get(int16_t dev_id, uint16_t vchan,
 __rte_experimental
 int rte_dma_stats_reset(int16_t dev_id, uint16_t vchan);
 
+/**
+ * device vchannel status
+ *
+ * Enum with the options for the channel status, either idle, active or halted due to error
+ * @see rte_dma_vchan_status
+ */
+enum rte_dma_vchan_status {
+	RTE_DMA_VCHAN_IDLE,          /**< not processing, awaiting ops */
+	RTE_DMA_VCHAN_ACTIVE,        /**< currently processing jobs */
+	RTE_DMA_VCHAN_HALTED_ERROR,  /**< not processing due to error, cannot accept new ops */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Determine if all jobs have completed on a device channel.
+ * This function is primarily designed for testing use, as it allows a process to check if
+ * all jobs are completed, without actually gathering completions from those jobs.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ * @param[out] status
+ *   The vchan status
+ * @return
+ *   0 - call completed successfully
+ *   < 0 - error code indicating there was a problem calling the API
+ */
+__rte_experimental
+int
+rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index edb3286cbb..0eec1aa43b 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -46,6 +46,10 @@ typedef int (*rte_dma_vchan_setup_t)(struct rte_dma_dev *dev, uint16_t vchan,
 				const struct rte_dma_vchan_conf *conf,
 				uint32_t conf_sz);
 
+/** @internal Used to check if a virtual channel has finished all jobs. */
+typedef int (*rte_dma_vchan_status_t)(const struct rte_dma_dev *dev, uint16_t vchan,
+		enum rte_dma_vchan_status *status);
+
 /** @internal Used to retrieve basic statistics. */
 typedef int (*rte_dma_stats_get_t)(const struct rte_dma_dev *dev,
 			uint16_t vchan, struct rte_dma_stats *stats,
@@ -119,6 +123,8 @@ struct rte_dma_dev_ops {
 	rte_dma_stats_reset_t    stats_reset;
 
 	rte_dma_dump_t           dev_dump;
+	rte_dma_vchan_status_t   vchan_status;
+
 };
 
 /**
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index c780463bb2..40ea517016 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -20,6 +20,7 @@ EXPERIMENTAL {
 	rte_dma_stop;
 	rte_dma_submit;
 	rte_dma_vchan_setup;
+	rte_dma_vchan_status;
 
 	local: *;
 };
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 02/13] dma/skeleton: add channel status function
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 01/13] dmadev: add channel status check for testing use Bruce Richardson
@ 2021-09-24 10:29   ` Bruce Richardson
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 03/13] dmadev: add burst capacity API Bruce Richardson
                     ` (11 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:29 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

In order to avoid timing errors with the unit tests, we need to ensure
we have the vchan_status function to report when a channel is idle.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/dma/skeleton/skeleton_dmadev.c | 18 +++++++++++++++++-
 drivers/dma/skeleton/skeleton_dmadev.h |  2 +-
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c
index 876878bb78..ada9a3be68 100644
--- a/drivers/dma/skeleton/skeleton_dmadev.c
+++ b/drivers/dma/skeleton/skeleton_dmadev.c
@@ -80,7 +80,7 @@ cpucopy_thread(void *param)
 
 		hw->zero_req_count = 0;
 		rte_memcpy(desc->dst, desc->src, desc->len);
-		hw->completed_count++;
+		__atomic_add_fetch(&hw->completed_count, 1, __ATOMIC_RELEASE);
 		(void)rte_ring_enqueue(hw->desc_completed, (void *)desc);
 	}
 
@@ -258,6 +258,21 @@ skeldma_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
 	return vchan_setup(hw, conf->nb_desc);
 }
 
+static int
+skeldma_vchan_status(const struct rte_dma_dev *dev,
+		uint16_t vchan, enum rte_dma_vchan_status *status)
+{
+	struct skeldma_hw *hw = dev->dev_private;
+
+	RTE_SET_USED(vchan);
+
+	*status = RTE_DMA_VCHAN_IDLE;
+	if (hw->submitted_count != __atomic_load_n(&hw->completed_count, __ATOMIC_ACQUIRE)
+			|| hw->zero_req_count == 0)
+		*status = RTE_DMA_VCHAN_ACTIVE;
+	return 0;
+}
+
 static int
 skeldma_stats_get(const struct rte_dma_dev *dev, uint16_t vchan,
 		  struct rte_dma_stats *stats, uint32_t stats_sz)
@@ -425,6 +440,7 @@ static const struct rte_dma_dev_ops skeldma_ops = {
 	.dev_close     = skeldma_close,
 
 	.vchan_setup   = skeldma_vchan_setup,
+	.vchan_status  = skeldma_vchan_status,
 
 	.stats_get     = skeldma_stats_get,
 	.stats_reset   = skeldma_stats_reset,
diff --git a/drivers/dma/skeleton/skeleton_dmadev.h b/drivers/dma/skeleton/skeleton_dmadev.h
index eaa52364bf..91eb5460fc 100644
--- a/drivers/dma/skeleton/skeleton_dmadev.h
+++ b/drivers/dma/skeleton/skeleton_dmadev.h
@@ -54,7 +54,7 @@ struct skeldma_hw {
 
 	/* Cache delimiter for cpucopy thread's operation data */
 	char cache2 __rte_cache_aligned;
-	uint32_t zero_req_count;
+	volatile uint32_t zero_req_count;
 	uint64_t completed_count;
 };
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 03/13] dmadev: add burst capacity API
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 01/13] dmadev: add channel status check for testing use Bruce Richardson
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 02/13] dma/skeleton: add channel status function Bruce Richardson
@ 2021-09-24 10:29   ` Bruce Richardson
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 04/13] dma/skeleton: add burst capacity function Bruce Richardson
                     ` (10 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:29 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj

From: Kevin Laatz <kevin.laatz@intel.com>

Add a burst capacity check API to the dmadev library. This API is useful to
applications which need to know how many descriptors can be enqueued in the
current batch. For example, it could be used to determine whether all
segments of a multi-segment packet can be enqueued in the same batch or not
(to avoid half-offload of the packet).
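
As a rough usage sketch for the multi-segment case described above, assuming
the caller has already gathered the segment IOVAs and lengths (the helper
name and parameters are illustrative only):

#include <errno.h>
#include <rte_dmadev.h>

/* Enqueue either all segments of a packet or none of them, to avoid half-offload. */
static int
enqueue_pkt_segs(int16_t dev_id, uint16_t vchan, const rte_iova_t *src,
		const rte_iova_t *dst, const uint32_t *len, uint16_t nb_segs)
{
	uint16_t i;

	if (rte_dma_burst_capacity(dev_id, vchan) < nb_segs)
		return -ENOSPC; /* caller falls back to CPU copies or retries later */

	for (i = 0; i < nb_segs; i++)
		if (rte_dma_copy(dev_id, vchan, src[i], dst[i], len[i], 0) < 0)
			return -EIO;

	rte_dma_submit(dev_id, vchan);
	return 0;
}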

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 lib/dmadev/rte_dmadev.h      | 31 +++++++++++++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h |  6 ++++--
 lib/dmadev/version.map       |  1 +
 3 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index f4c6546d87..bdba19b947 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -1100,6 +1100,37 @@ rte_dma_completed_status(int16_t dev_id, uint16_t vchan,
 	return (*dev->completed_status)(dev, vchan, nb_cpls, last_idx, status);
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Check remaining capacity in descriptor ring for the current burst.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ *
+ * @return
+ *   - Remaining space in the descriptor ring for the current burst.
+ *   - 0 on error
+ */
+__rte_experimental
+static inline uint16_t
+rte_dma_burst_capacity(int16_t dev_id, uint16_t vchan)
+{
+	const struct rte_dma_dev *dev = &rte_dma_devices[dev_id];
+
+#ifdef RTE_DMADEV_DEBUG
+	if (!rte_dma_is_valid(dev_id) ||
+	    !dev->data->dev_started ||
+	    vchan >= dev->data->dev_conf.nb_vchans)
+		return 0;
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->burst_capacity, 0);
+#endif
+	return (*dev->burst_capacity)(dev, vchan);
+}
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index 0eec1aa43b..fd6a737d46 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -58,6 +58,9 @@ typedef int (*rte_dma_stats_get_t)(const struct rte_dma_dev *dev,
 /** @internal Used to reset basic statistics. */
 typedef int (*rte_dma_stats_reset_t)(struct rte_dma_dev *dev, uint16_t vchan);
 
+/** @internal Used to check the remaining space in descriptor ring. */
+typedef uint16_t (*rte_dma_burst_capacity_t)(const struct rte_dma_dev *dev, uint16_t vchan);
+
 /** @internal Used to dump internal information. */
 typedef int (*rte_dma_dump_t)(const struct rte_dma_dev *dev, FILE *f);
 
@@ -124,7 +127,6 @@ struct rte_dma_dev_ops {
 
 	rte_dma_dump_t           dev_dump;
 	rte_dma_vchan_status_t   vchan_status;
-
 };
 
 /**
@@ -171,7 +173,7 @@ struct rte_dma_dev {
 	rte_dma_submit_t           submit;
 	rte_dma_completed_t        completed;
 	rte_dma_completed_status_t completed_status;
-	void *reserved_cl0;
+	rte_dma_burst_capacity_t   burst_capacity;
 	/** Reserve space for future IO functions, while keeping data and
 	 * dev_ops pointers on the second cacheline.
 	 */
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 40ea517016..66420c4ede 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -1,6 +1,7 @@
 EXPERIMENTAL {
 	global:
 
+	rte_dma_burst_capacity;
 	rte_dma_close;
 	rte_dma_completed;
 	rte_dma_completed_status;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 04/13] dma/skeleton: add burst capacity function
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
                     ` (2 preceding siblings ...)
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 03/13] dmadev: add burst capacity API Bruce Richardson
@ 2021-09-24 10:29   ` Bruce Richardson
  2021-09-24 14:51     ` Conor Walsh
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 05/13] dmadev: add device iterator Bruce Richardson
                     ` (9 subsequent siblings)
  13 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:29 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Implement a function to return the remaining space for operations.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/dma/skeleton/skeleton_dmadev.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c
index ada9a3be68..65655261e6 100644
--- a/drivers/dma/skeleton/skeleton_dmadev.c
+++ b/drivers/dma/skeleton/skeleton_dmadev.c
@@ -432,6 +432,15 @@ skeldma_completed_status(struct rte_dma_dev *dev,
 	return count;
 }
 
+static uint16_t
+skeldma_burst_capacity(const struct rte_dma_dev *dev, uint16_t vchan)
+{
+	struct skeldma_hw *hw = dev->dev_private;
+
+	RTE_SET_USED(vchan);
+	return rte_ring_count(hw->desc_empty);
+}
+
 static const struct rte_dma_dev_ops skeldma_ops = {
 	.dev_info_get  = skeldma_info_get,
 	.dev_configure = skeldma_configure,
@@ -467,6 +476,7 @@ skeldma_create(const char *name, struct rte_vdev_device *vdev, int lcore_id)
 	dev->submit = skeldma_submit;
 	dev->completed = skeldma_completed;
 	dev->completed_status = skeldma_completed_status;
+	dev->burst_capacity = skeldma_burst_capacity;
 	dev->dev_ops = &skeldma_ops;
 	dev->device = &vdev->device;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 05/13] dmadev: add device iterator
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
                     ` (3 preceding siblings ...)
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 04/13] dma/skeleton: add burst capacity function Bruce Richardson
@ 2021-09-24 10:29   ` Bruce Richardson
  2021-09-24 14:52     ` Conor Walsh
  2021-09-24 15:58     ` Kevin Laatz
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 06/13] app/test: add basic dmadev instance tests Bruce Richardson
                     ` (8 subsequent siblings)
  13 siblings, 2 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:29 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a function and wrapper macro to iterate over all dma devices.
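
For reference, a minimal sketch of the intended usage (the test patch later
in this series iterates over devices the same way):

#include <stdio.h>
#include <rte_dmadev.h>

static void
list_dma_devices(void)
{
	struct rte_dma_info info;
	int16_t i;

	RTE_DMA_FOREACH_DEV(i) {
		rte_dma_info_get(i, &info);
		printf("dmadev %d: max_vchans %u, socket %d\n",
				i, info.max_vchans, info.numa_node);
	}
}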

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/dmadev/rte_dmadev.c | 13 +++++++++++++
 lib/dmadev/rte_dmadev.h | 18 ++++++++++++++++++
 lib/dmadev/version.map  |  1 +
 3 files changed, 32 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index e61116260b..a761fe1a91 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -55,6 +55,19 @@ rte_dma_dev_max(size_t dev_max)
 	return 0;
 }
 
+int16_t
+rte_dma_next_dev(int16_t start_dev_id)
+{
+	int16_t dev_id = start_dev_id;
+	while (dev_id < dma_devices_max && rte_dma_devices[dev_id].state == RTE_DMA_DEV_UNUSED)
+		dev_id++;
+
+	if (dev_id < dma_devices_max)
+		return dev_id;
+
+	return -1;
+}
+
 static int
 dma_check_name(const char *name)
 {
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index bdba19b947..04565f8c5b 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -219,6 +219,24 @@ bool rte_dma_is_valid(int16_t dev_id);
 __rte_experimental
 uint16_t rte_dma_count_avail(void);
 
+/**
+ * Iterates over valid dmadev instances.
+ *
+ * @param start_dev_id
+ *   The id of the next possible dmadev.
+ * @return
+ *   Next valid dmadev, -1 if there is none.
+ */
+__rte_experimental
+int16_t rte_dma_next_dev(int16_t start_dev_id);
+
+/** Utility macro to iterate over all available dmadevs */
+#define RTE_DMA_FOREACH_DEV(p) \
+	for (p = rte_dma_next_dev(0); \
+	     p != -1; \
+	     p = rte_dma_next_dev(p + 1))
+
+
 /** DMA device support memory-to-memory transfer.
  *
  * @see struct rte_dma_info::dev_capa
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 66420c4ede..0ab570a1be 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -15,6 +15,7 @@ EXPERIMENTAL {
 	rte_dma_get_dev_id;
 	rte_dma_info_get;
 	rte_dma_is_valid;
+	rte_dma_next_dev;
 	rte_dma_start;
 	rte_dma_stats_get;
 	rte_dma_stats_reset;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 06/13] app/test: add basic dmadev instance tests
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
                     ` (4 preceding siblings ...)
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 05/13] dmadev: add device iterator Bruce Richardson
@ 2021-09-24 10:29   ` Bruce Richardson
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 07/13] app/test: add basic dmadev copy tests Bruce Richardson
                     ` (7 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:29 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Run basic sanity tests for configuring, starting and stopping a dmadev
instance to help validate drivers. This also provides the framework for
future tests for data-path operation.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 app/test/test_dmadev.c | 73 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 72 insertions(+), 1 deletion(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index e765ec5f2c..9c19659b26 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -3,6 +3,8 @@
  * Copyright(c) 2021 Intel Corporation
  */
 
+#include <inttypes.h>
+
 #include <rte_dmadev.h>
 #include <rte_bus_vdev.h>
 
@@ -11,6 +13,66 @@
 /* from test_dmadev_api.c */
 extern int test_dma_api(uint16_t dev_id);
 
+#define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); return -1; } while (0)
+
+static void
+__rte_format_printf(3, 4)
+print_err(const char *func, int lineno, const char *format, ...)
+{
+	va_list ap;
+
+	fprintf(stderr, "In %s:%d - ", func, lineno);
+	va_start(ap, format);
+	vfprintf(stderr, format, ap);
+	va_end(ap);
+}
+
+static int
+test_dmadev_instance(uint16_t dev_id)
+{
+#define TEST_RINGSIZE 512
+	struct rte_dma_stats stats;
+	struct rte_dma_info info;
+	const struct rte_dma_conf conf = { .nb_vchans = 1};
+	const struct rte_dma_vchan_conf qconf = {
+			.direction = RTE_DMA_DIR_MEM_TO_MEM,
+			.nb_desc = TEST_RINGSIZE,
+	};
+	const int vchan = 0;
+
+	printf("\n### Test dmadev instance %u [%s]\n",
+			dev_id, rte_dma_devices[dev_id].data->dev_name);
+
+	rte_dma_info_get(dev_id, &info);
+	if (info.max_vchans < 1)
+		ERR_RETURN("Error, no channels available on device id %u\n", dev_id);
+
+	if (rte_dma_configure(dev_id, &conf) != 0)
+		ERR_RETURN("Error with rte_dma_configure()\n");
+
+	if (rte_dma_vchan_setup(dev_id, vchan, &qconf) < 0)
+		ERR_RETURN("Error with queue configuration\n");
+
+	rte_dma_info_get(dev_id, &info);
+	if (info.nb_vchans != 1)
+		ERR_RETURN("Error, no configured queues reported on device id %u\n", dev_id);
+
+	if (rte_dma_start(dev_id) != 0)
+		ERR_RETURN("Error with rte_dma_start()\n");
+
+	if (rte_dma_stats_get(dev_id, vchan, &stats) != 0)
+		ERR_RETURN("Error with rte_dma_stats_get()\n");
+
+	if (stats.completed != 0 || stats.submitted != 0 || stats.errors != 0)
+		ERR_RETURN("Error device stats are not all zero: completed = %"PRIu64", "
+				"submitted = %"PRIu64", errors = %"PRIu64"\n",
+				stats.completed, stats.submitted, stats.errors);
+
+	rte_dma_stop(dev_id);
+	rte_dma_stats_reset(dev_id, vchan);
+	return 0;
+}
+
 static int
 test_apis(void)
 {
@@ -33,9 +95,18 @@ test_apis(void)
 static int
 test_dma(void)
 {
+	int i;
+
 	/* basic sanity on dmadev infrastructure */
 	if (test_apis() < 0)
-		return -1;
+		ERR_RETURN("Error performing API tests\n");
+
+	if (rte_dma_count_avail() == 0)
+		return TEST_SKIPPED;
+
+	RTE_DMA_FOREACH_DEV(i)
+		if (test_dmadev_instance(i) < 0)
+			ERR_RETURN("Error, test failure for device %d\n", i);
 
 	return 0;
 }
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 07/13] app/test: add basic dmadev copy tests
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
                     ` (5 preceding siblings ...)
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 06/13] app/test: add basic dmadev instance tests Bruce Richardson
@ 2021-09-24 10:29   ` Bruce Richardson
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 08/13] app/test: run test suite on skeleton driver Bruce Richardson
                     ` (6 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:29 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

For each dmadev instance, perform some basic copy tests to validate that
functionality.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 175 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 175 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 9c19659b26..21600686e8 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -6,6 +6,10 @@
 #include <inttypes.h>
 
 #include <rte_dmadev.h>
+#include <rte_mbuf.h>
+#include <rte_pause.h>
+#include <rte_cycles.h>
+#include <rte_random.h>
 #include <rte_bus_vdev.h>
 
 #include "test.h"
@@ -15,6 +19,11 @@ extern int test_dma_api(uint16_t dev_id);
 
 #define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); return -1; } while (0)
 
+#define COPY_LEN 1024
+
+static struct rte_mempool *pool;
+static uint16_t id_count;
+
 static void
 __rte_format_printf(3, 4)
 print_err(const char *func, int lineno, const char *format, ...)
@@ -27,10 +36,155 @@ print_err(const char *func, int lineno, const char *format, ...)
 	va_end(ap);
 }
 
+static int
+runtest(const char *printable, int (*test_fn)(int dev_id, uint16_t vchan), int iterations,
+		int dev_id, uint16_t vchan, bool check_err_stats)
+{
+	struct rte_dma_stats stats;
+	int i;
+
+	rte_dma_stats_reset(dev_id, vchan);
+	printf("DMA Dev %d: Running %s Tests %s\n", dev_id, printable,
+			check_err_stats ? " " : "(errors expected)");
+	for (i = 0; i < iterations; i++) {
+		if (test_fn(dev_id, vchan) < 0)
+			return -1;
+
+		rte_dma_stats_get(dev_id, 0, &stats);
+		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
+		printf("Ops completed: %"PRIu64"\t", stats.completed);
+		printf("Errors: %"PRIu64"\r", stats.errors);
+
+		if (stats.completed != stats.submitted)
+			ERR_RETURN("\nError, not all submitted jobs are reported as completed\n");
+		if (check_err_stats && stats.errors != 0)
+			ERR_RETURN("\nErrors reported during op processing, aborting tests\n");
+	}
+	printf("\n");
+	return 0;
+}
+
+static void
+await_hw(int dev_id, uint16_t vchan)
+{
+	enum rte_dma_vchan_status st;
+
+	if (rte_dma_vchan_status(dev_id, vchan, &st) < 0) {
+		/* for drivers that don't support this op, just sleep for 1 millisecond */
+		rte_delay_us_sleep(1000);
+		return;
+	}
+
+	/* for those that do, *max* end time is one second from now, but all should be faster */
+	const uint64_t end_cycles = rte_get_timer_cycles() + rte_get_timer_hz();
+	while (st == RTE_DMA_VCHAN_ACTIVE && rte_get_timer_cycles() < end_cycles) {
+		rte_pause();
+		rte_dma_vchan_status(dev_id, vchan, &st);
+	}
+}
+
+static int
+test_enqueue_copies(int dev_id, uint16_t vchan)
+{
+	unsigned int i;
+	uint16_t id;
+
+	/* test doing a single copy */
+	do {
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		id = rte_dma_copy(dev_id, vchan, rte_pktmbuf_iova(src), rte_pktmbuf_iova(dst),
+				COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT);
+		if (id != id_count)
+			ERR_RETURN("Error with rte_dma_copy, got %u, expected %u\n",
+					id, id_count);
+
+		/* give time for copy to finish, then check it was done */
+		await_hw(dev_id, vchan);
+
+		for (i = 0; i < COPY_LEN; i++)
+			if (dst_data[i] != src_data[i])
+				ERR_RETURN("Data mismatch at char %u [Got %02x not %02x]\n", i,
+						dst_data[i], src_data[i]);
+
+		/* now check completion works */
+		if (rte_dma_completed(dev_id, vchan, 1, &id, NULL) != 1)
+			ERR_RETURN("Error with rte_dma_completed\n");
+
+		if (id != id_count)
+			ERR_RETURN("Error:incorrect job id received, %u [expected %u]\n",
+					id, id_count);
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+
+		/* now check completion returns nothing more */
+		if (rte_dma_completed(dev_id, 0, 1, NULL, NULL) != 0)
+			ERR_RETURN("Error with rte_dma_completed in empty check\n");
+
+		id_count++;
+
+	} while (0);
+
+	/* test doing multiple single copies */
+	do {
+		const uint16_t max_ops = 4;
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+		uint16_t count;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		/* perform the same copy <max_ops> times */
+		for (i = 0; i < max_ops; i++)
+			if (rte_dma_copy(dev_id, vchan,
+					rte_pktmbuf_iova(src),
+					rte_pktmbuf_iova(dst),
+					COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT) != id_count++)
+				ERR_RETURN("Error with rte_dma_copy\n");
+
+		await_hw(dev_id, vchan);
+
+		count = rte_dma_completed(dev_id, vchan, max_ops * 2, &id, NULL);
+		if (count != max_ops)
+			ERR_RETURN("Error with rte_dma_completed, got %u not %u\n",
+					count, max_ops);
+
+		if (id != id_count - 1)
+			ERR_RETURN("Error, incorrect job id returned: got %u not %u\n",
+					id, id_count - 1);
+
+		for (i = 0; i < COPY_LEN; i++)
+			if (dst_data[i] != src_data[i])
+				ERR_RETURN("Data mismatch at char %u\n", i);
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+	} while (0);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
 #define TEST_RINGSIZE 512
+#define CHECK_ERRS    true
 	struct rte_dma_stats stats;
 	struct rte_dma_info info;
 	const struct rte_dma_conf conf = { .nb_vchans = 1};
@@ -67,10 +221,31 @@ test_dmadev_instance(uint16_t dev_id)
 		ERR_RETURN("Error device stats are not all zero: completed = %"PRIu64", "
 				"submitted = %"PRIu64", errors = %"PRIu64"\n",
 				stats.completed, stats.submitted, stats.errors);
+	id_count = 0;
+
+	/* create a mempool for running tests */
+	pool = rte_pktmbuf_pool_create("TEST_DMADEV_POOL",
+			TEST_RINGSIZE * 2, /* n == num elements */
+			32,  /* cache size */
+			0,   /* priv size */
+			2048, /* data room size */
+			info.numa_node);
+	if (pool == NULL)
+		ERR_RETURN("Error with mempool creation\n");
 
+	/* run the test cases, use many iterations to ensure UINT16_MAX id wraparound */
+	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
+	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
 	return 0;
+
+err:
+	rte_mempool_free(pool);
+	rte_dma_stop(dev_id);
+	return -1;
 }
 
 static int
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 08/13] app/test: run test suite on skeleton driver
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
                     ` (6 preceding siblings ...)
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 07/13] app/test: add basic dmadev copy tests Bruce Richardson
@ 2021-09-24 10:29   ` Bruce Richardson
  2021-09-24 15:58     ` Kevin Laatz
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 09/13] app/test: add more comprehensive dmadev copy tests Bruce Richardson
                     ` (5 subsequent siblings)
  13 siblings, 1 reply; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:29 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

When running the dmadev_autotest, run the suite of copy tests on the
skeleton driver created for API testing too, rather than just destroying
the driver instances once the API tests are complete. This helps to
sanity-check that the tests themselves are reasonable.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_dmadev.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 21600686e8..c26329e63d 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -255,14 +255,13 @@ test_apis(void)
 	int id;
 	int ret;
 
-	if (rte_vdev_init(pmd, NULL) < 0)
-		return TEST_SKIPPED;
+	/* attempt to create skeleton instance - ignore errors due to one being already present */
+	rte_vdev_init(pmd, NULL);
 	id = rte_dma_get_dev_id(pmd);
 	if (id < 0)
 		return TEST_SKIPPED;
 	printf("\n### Test dmadev infrastructure using skeleton driver\n");
 	ret = test_dma_api(id);
-	rte_vdev_uninit(pmd);
 
 	return ret;
 }
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 09/13] app/test: add more comprehensive dmadev copy tests
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
                     ` (7 preceding siblings ...)
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 08/13] app/test: run test suite on skeleton driver Bruce Richardson
@ 2021-09-24 10:29   ` Bruce Richardson
  2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 10/13] dmadev: add flag for error handling support Bruce Richardson
                     ` (4 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:29 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add unit tests for various combinations of use for dmadev, copying
bursts of packets in various formats, e.g.

1. enqueuing two smaller bursts and completing them as one burst
2. enqueuing one burst and gathering completions in smaller bursts
3. using completed_status() function to gather completions rather than
   just completed()

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 101 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 100 insertions(+), 1 deletion(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index c26329e63d..0aecfa6289 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -83,6 +83,98 @@ await_hw(int dev_id, uint16_t vchan)
 	}
 }
 
+/* run a series of copy tests just using some different options for enqueues and completions */
+static int
+do_multi_copies(int dev_id, uint16_t vchan,
+		int split_batches,     /* submit 2 x 16 or 1 x 32 burst */
+		int split_completions, /* gather 2 x 16 or 1 x 32 completions */
+		int use_completed_status) /* use completed or completed_status function */
+{
+	struct rte_mbuf *srcs[32], *dsts[32];
+	enum rte_dma_status_code sc[32];
+	unsigned int i, j;
+	bool dma_err = false;
+
+	/* Enqueue burst of copies and hit doorbell */
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		uint64_t *src_data;
+
+		if (split_batches && i == RTE_DIM(srcs) / 2)
+			rte_dma_submit(dev_id, vchan);
+
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+		if (srcs[i] == NULL || dsts[i] == NULL)
+			ERR_RETURN("Error allocating buffers\n");
+
+		src_data = rte_pktmbuf_mtod(srcs[i], uint64_t *);
+		for (j = 0; j < COPY_LEN/sizeof(uint64_t); j++)
+			src_data[j] = rte_rand();
+
+		if (rte_dma_copy(dev_id, vchan, srcs[i]->buf_iova + srcs[i]->data_off,
+				dsts[i]->buf_iova + dsts[i]->data_off, COPY_LEN, 0) != id_count++)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", i);
+	}
+	rte_dma_submit(dev_id, vchan);
+
+	await_hw(dev_id, vchan);
+
+	if (split_completions) {
+		/* gather completions in two halves */
+		uint16_t half_len = RTE_DIM(srcs) / 2;
+		int ret = rte_dma_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err)
+			ERR_RETURN("Error with rte_dma_completed - first half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+
+		ret = rte_dma_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err)
+			ERR_RETURN("Error with rte_dma_completed - second half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+	} else {
+		/* gather all completions in one go, using either
+		 * completed or completed_status fns
+		 */
+		if (!use_completed_status) {
+			int n = rte_dma_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+			if (n != RTE_DIM(srcs) || dma_err)
+				ERR_RETURN("Error with rte_dma_completed, %u [expected: %zu], dma_err = %d\n",
+						n, RTE_DIM(srcs), dma_err);
+		} else {
+			int n = rte_dma_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc);
+			if (n != RTE_DIM(srcs))
+				ERR_RETURN("Error with rte_dma_completed_status, %u [expected: %zu]\n",
+						n, RTE_DIM(srcs));
+
+			for (j = 0; j < (uint16_t)n; j++)
+				if (sc[j] != RTE_DMA_STATUS_SUCCESSFUL)
+					ERR_RETURN("Error with rte_dma_completed_status, job %u reports failure [code %u]\n",
+							j, sc[j]);
+		}
+	}
+
+	/* check for empty */
+	int ret = use_completed_status ?
+			rte_dma_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc) :
+			rte_dma_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+	if (ret != 0)
+		ERR_RETURN("Error with completion check - ops unexpectedly returned\n");
+
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		char *src_data, *dst_data;
+
+		src_data = rte_pktmbuf_mtod(srcs[i], char *);
+		dst_data = rte_pktmbuf_mtod(dsts[i], char *);
+		for (j = 0; j < COPY_LEN; j++)
+			if (src_data[j] != dst_data[j])
+				ERR_RETURN("Error with copy of packet %u, byte %u\n", i, j);
+
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
 static int
 test_enqueue_copies(int dev_id, uint16_t vchan)
 {
@@ -177,7 +269,14 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 		rte_pktmbuf_free(dst);
 	} while (0);
 
-	return 0;
+	/* test doing multiple copies */
+	return do_multi_copies(dev_id, vchan, 0, 0, 0) /* enqueue and complete 1 batch at a time */
+			/* enqueue 2 batches and then complete both */
+			|| do_multi_copies(dev_id, vchan, 1, 0, 0)
+			/* enqueue 1 batch, then complete in two halves */
+			|| do_multi_copies(dev_id, vchan, 0, 1, 0)
+			/* test using completed_status in place of regular completed API */
+			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
 static int
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 10/13] dmadev: add flag for error handling support
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
                     ` (8 preceding siblings ...)
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 09/13] app/test: add more comprehensive dmadev copy tests Bruce Richardson
@ 2021-09-24 10:31   ` Bruce Richardson
  2021-09-24 14:52     ` Conor Walsh
  2021-09-24 15:58     ` Kevin Laatz
  2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 11/13] app/test: test dmadev instance failure handling Bruce Richardson
                     ` (3 subsequent siblings)
  13 siblings, 2 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:31 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Due to HW or driver limitations, not all dmadevs may support full error
handling, e.g. safely handling and reporting an invalid address passed to
a copy operation. The skeleton dmadev, for example, being pure software,
will always seg-fault if passed an invalid address. To indicate the
availability of safe error handling by a device, we add a capability
flag for it.
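
A minimal sketch of how a test or application might use the new flag before
deliberately passing bad addresses to a device:

#include <stdbool.h>
#include <rte_dmadev.h>

/* Return true only if the device reports safe handling of invalid addresses. */
static bool
dev_handles_errors(int16_t dev_id)
{
	struct rte_dma_info info;

	if (rte_dma_info_get(dev_id, &info) < 0)
		return false;
	return (info.dev_capa & RTE_DMA_CAPA_HANDLES_ERRORS) != 0;
}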

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/dmadev/rte_dmadev.c | 1 +
 lib/dmadev/rte_dmadev.h | 9 +++++++++
 2 files changed, 10 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index a761fe1a91..c9224035dc 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -665,6 +665,7 @@ dma_capability_name(uint64_t capability)
 		{ RTE_DMA_CAPA_DEV_TO_DEV,  "dev2dev" },
 		{ RTE_DMA_CAPA_SVA,         "sva"     },
 		{ RTE_DMA_CAPA_SILENT,      "silent"  },
+		{ RTE_DMA_CAPA_HANDLES_ERRORS, "handles_errors" },
 		{ RTE_DMA_CAPA_OPS_COPY,    "copy"    },
 		{ RTE_DMA_CAPA_OPS_COPY_SG, "copy_sg" },
 		{ RTE_DMA_CAPA_OPS_FILL,    "fill"    },
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index 04565f8c5b..ae0b357343 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -273,6 +273,15 @@ int16_t rte_dma_next_dev(int16_t start_dev_id);
  * @see struct rte_dma_conf::silent_mode
  */
 #define RTE_DMA_CAPA_SILENT		RTE_BIT64(5)
+/** DMA device supports error handling
+ *
+ * With this bit set, invalid input addresses will be reported as operation failures
+ * to the user but other operations can continue.
+ * Without this bit set, invalid data is not handled by either HW or driver, so user
+ * must ensure that all memory addresses are valid and accessible by HW.
+ */
+#define RTE_DMA_CAPA_HANDLES_ERRORS	RTE_BIT64(6)
+
 /** DMA device support copy ops.
  * This capability start with index of 32, so that it could leave gap between
  * normal capability and ops capability.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 11/13] app/test: test dmadev instance failure handling
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
                     ` (9 preceding siblings ...)
  2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 10/13] dmadev: add flag for error handling support Bruce Richardson
@ 2021-09-24 10:31   ` Bruce Richardson
  2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 12/13] app/test: add dmadev fill tests Bruce Richardson
                     ` (2 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:31 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a series of tests to inject bad copy operations into a dmadev to
test the error handling and reporting capabilities. Various combinations
of errors in various positions in a burst are tested, as are errors in
bursts with fence flag set, and multiple errors in a single burst.
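
The basic injection pattern used throughout these tests is to pass a NULL (0)
source IOVA for the chosen index and then confirm that rte_dma_completed()
flags the failure rather than counting the job as completed. A minimal sketch
of that pattern (the helper name and the fixed 1 ms wait are illustrative
only; the tests themselves wait on the vchan status instead):

#include <stdbool.h>
#include <rte_dmadev.h>
#include <rte_cycles.h>

/* enqueue one deliberately-bad copy and check it is reported as an error */
static int
check_error_reporting(int16_t dev_id, uint16_t vchan, rte_iova_t dst_iova,
		uint32_t len)
{
	bool has_error = false;
	uint16_t last_idx;

	if (rte_dma_copy(dev_id, vchan, 0 /* invalid source */, dst_iova,
			len, RTE_DMA_OP_FLAG_SUBMIT) < 0)
		return -1;
	rte_delay_us_sleep(1000);
	/* the failed op must not be counted as completed, and has_error must be set */
	if (rte_dma_completed(dev_id, vchan, 1, &last_idx, &has_error) != 0 || !has_error)
		return -1;
	return 0;
}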

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 361 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 361 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 0aecfa6289..d5885ba10e 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -279,6 +279,354 @@ test_enqueue_copies(int dev_id, uint16_t vchan)
 			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
+/* Failure handling test cases - global macros and variables for those tests*/
+#define COMP_BURST_SZ	16
+#define OPT_FENCE(idx) ((fence && idx == 8) ? RTE_DMA_OP_FLAG_FENCE : 0)
+
+static int
+test_failure_in_full_burst(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test single full batch statuses with failures */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	struct rte_dma_stats baseline, stats;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count;
+	unsigned int i;
+	bool error = false;
+	int err_count = 0;
+
+	rte_dma_stats_get(dev_id, vchan, &baseline); /* get a baseline set of stats */
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(i == fail_idx ? 0 : (srcs[i]->buf_iova + srcs[i]->data_off)),
+				dsts[i]->buf_iova + dsts[i]->data_off,
+				COPY_LEN, OPT_FENCE(i));
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", i);
+		if (i == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	rte_dma_stats_get(dev_id, vchan, &stats);
+	if (stats.submitted != baseline.submitted + COMP_BURST_SZ)
+		ERR_RETURN("Submitted stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.submitted, baseline.submitted + COMP_BURST_SZ);
+
+	await_hw(dev_id, vchan);
+
+	count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx)
+		ERR_RETURN("Error with rte_dma_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+	if (!error)
+		ERR_RETURN("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+
+	/* all checks ok, now verify calling completed() again always returns 0 */
+	for (i = 0; i < 10; i++)
+		if (rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error) != 0
+				|| error == false || idx != (invalid_addr_id - 1))
+			ERR_RETURN("Error with follow-up completed calls for fail idx %u\n",
+					fail_idx);
+
+	status_count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ,
+			&idx, status);
+	/* Some HW may stop on error and only restart after the error status for that single
+	 * element has been read. To handle this case, if we get just one error back, wait for
+	 * more completions and get the status for the rest of the burst.
+	 */
+	if (status_count == 1) {
+		await_hw(dev_id, vchan);
+		status_count += rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ - 1,
+					&idx, &status[1]);
+	}
+	/* check that at this point we have all status values */
+	if (status_count != COMP_BURST_SZ - count)
+		ERR_RETURN("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+	/* now verify just one failure followed by multiple successful or skipped entries */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+	for (i = 1; i < status_count; i++)
+		/* after a failure in a burst, depending on ordering/fencing,
+		 * operations may be successful or skipped because of previous error.
+		 */
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[i] != RTE_DMA_STATUS_NOT_ATTEMPTED)
+			ERR_RETURN("Error with status calls for fail idx %u. Status for job %u (of %u) is not successful\n",
+					fail_idx, count + i, COMP_BURST_SZ);
+
+	/* check the completed + errors stats are as expected */
+	rte_dma_stats_get(dev_id, vchan, &stats);
+	if (stats.completed != baseline.completed + COMP_BURST_SZ)
+		ERR_RETURN("Completed stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.completed, baseline.completed + COMP_BURST_SZ);
+	for (i = 0; i < status_count; i++)
+		err_count += (status[i] != RTE_DMA_STATUS_SUCCESSFUL);
+	if (stats.errors != baseline.errors + err_count)
+		ERR_RETURN("'Errors' stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.errors, baseline.errors + err_count);
+
+	return 0;
+}
+
+static int
+test_individual_status_query_with_failure(int dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test gathering batch statuses one at a time */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count = 0, status_count = 0;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, OPT_FENCE(j));
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* use regular "completed" until we hit error */
+	while (!error) {
+		uint16_t n = rte_dma_completed(dev_id, vchan, 1, &idx, &error);
+		count += n;
+		if (n > 1 || count >= COMP_BURST_SZ)
+			ERR_RETURN("Error - too many completions received\n");
+		if (n == 0 && !error)
+			ERR_RETURN("Error, unexpectedly got zero completions after %u completed\n",
+					count);
+	}
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, last successful index not as expected, got %u, expected %u\n",
+				idx, invalid_addr_id - 1);
+
+	/* use completed_status until we hit end of burst */
+	while (count + status_count < COMP_BURST_SZ) {
+		uint16_t n = rte_dma_completed_status(dev_id, vchan, 1, &idx,
+				&status[status_count]);
+		await_hw(dev_id, vchan); /* allow delay to ensure jobs are completed */
+		status_count += n;
+		if (n != 1)
+			ERR_RETURN("Error: unexpected number of completions received, %u, not 1\n",
+					n);
+	}
+
+	/* check for single failure */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error, unexpected successful DMA transaction\n");
+	for (j = 1; j < status_count; j++)
+		if (status[j] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[j] != RTE_DMA_STATUS_NOT_ATTEMPTED)
+			ERR_RETURN("Error, unexpected DMA error reported\n");
+
+	return 0;
+}
+
+static int
+test_single_item_status_query_with_failure(int dev_id, uint16_t vchan,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* When an error occurs, collect just a single error using "completed_status()"
+	 * before going back to completed() calls
+	 */
+	enum rte_dma_status_code status;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count, count2;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* get up to the error point */
+	count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx)
+		ERR_RETURN("Error with rte_dma_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+	if (!error)
+		ERR_RETURN("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+
+	/* get the error code */
+	status_count = rte_dma_completed_status(dev_id, vchan, 1, &idx, &status);
+	if (status_count != 1)
+		ERR_RETURN("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+	if (status == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+
+	/* delay in case time is needed after the error is handled to complete the other jobs */
+	await_hw(dev_id, vchan);
+
+	/* get the rest of the completions without status */
+	count2 = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (error == true)
+		ERR_RETURN("Error, got further errors post completed_status() call, for failure case %u.\n",
+				fail_idx);
+	if (count + status_count + count2 != COMP_BURST_SZ)
+		ERR_RETURN("Error, incorrect number of completions received, got %u not %u\n",
+				count + status_count + count2, COMP_BURST_SZ);
+
+	return 0;
+}
+
+static int
+test_multi_failure(int dev_id, uint16_t vchan, struct rte_mbuf **srcs, struct rte_mbuf **dsts,
+		const unsigned int *fail, size_t num_fail)
+{
+	/* test having multiple errors in one go */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	unsigned int i, j;
+	uint16_t count, err_count = 0;
+	bool error = false;
+
+	/* enqueue and gather completions in one go */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere in the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dma_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ, NULL, status);
+	while (count < COMP_BURST_SZ) {
+		await_hw(dev_id, vchan);
+
+		uint16_t ret = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ - count,
+				NULL, &status[count]);
+		if (ret == 0)
+			ERR_RETURN("Error getting all completions for jobs. Got %u of %u\n",
+					count, COMP_BURST_SZ);
+		count += ret;
+	}
+	for (i = 0; i < count; i++)
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
+			err_count++;
+
+	if (err_count != num_fail)
+		ERR_RETURN("Error: Invalid number of failed completions returned, %u; expected %zu\n",
+			err_count, num_fail);
+
+	/* enqueue and gather completions in bursts, but getting errors one at a time */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere in the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dma_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = 0;
+	err_count = 0;
+	while (count + err_count < COMP_BURST_SZ) {
+		count += rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, NULL, &error);
+		if (error) {
+			uint16_t ret = rte_dma_completed_status(dev_id, vchan, 1,
+					NULL, status);
+			if (ret != 1)
+				ERR_RETURN("Error getting error-status for completions\n");
+			err_count += ret;
+			await_hw(dev_id, vchan);
+		}
+	}
+	if (err_count != num_fail)
+		ERR_RETURN("Error: Incorrect number of failed completions received, got %u not %zu\n",
+				err_count, num_fail);
+
+	return 0;
+}
+
+static int
+test_completion_status(int dev_id, uint16_t vchan, bool fence)
+{
+	const unsigned int fail[] = {0, 7, 14, 15};
+	struct rte_mbuf *srcs[COMP_BURST_SZ], *dsts[COMP_BURST_SZ];
+	unsigned int i;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+	}
+
+	for (i = 0; i < RTE_DIM(fail); i++) {
+		if (test_failure_in_full_burst(dev_id, vchan, fence, srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		if (test_individual_status_query_with_failure(dev_id, vchan, fence,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		/* test runs the same whether fenced or unfenced, but no harm in running it twice */
+		if (test_single_item_status_query_with_failure(dev_id, vchan,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+	}
+
+	if (test_multi_failure(dev_id, vchan, srcs, dsts, fail, RTE_DIM(fail)) < 0)
+		return -1;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
+static int
+test_completion_handling(int dev_id, uint16_t vchan)
+{
+	return test_completion_status(dev_id, vchan, false)              /* without fences */
+			|| test_completion_status(dev_id, vchan, true);  /* with fences */
+
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -336,6 +684,19 @@ test_dmadev_instance(uint16_t dev_id)
 	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
 		goto err;
 
+	/* to test error handling we can provide null pointers for source or dest in copies. This
+	 * requires VA mode in DPDK, since NULL(0) is a valid physical address.
+	 * We also need hardware that can report errors back.
+	 */
+	if (rte_eal_iova_mode() != RTE_IOVA_VA)
+		printf("DMA Dev %u: DPDK not in VA mode, skipping error handling tests\n", dev_id);
+	else if ((info.dev_capa & RTE_DMA_CAPA_HANDLES_ERRORS) == 0)
+		printf("DMA Dev %u: device does not report errors, skipping error handling tests\n",
+				dev_id);
+	else if (runtest("error handling", test_completion_handling, 1,
+			dev_id, vchan, !CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 12/13] app/test: add dmadev fill tests
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
                     ` (10 preceding siblings ...)
  2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 11/13] app/test: test dmadev instance failure handling Bruce Richardson
@ 2021-09-24 10:31   ` Bruce Richardson
  2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 13/13] app/test: add dmadev burst capacity API test Bruce Richardson
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:31 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

For dma devices which support the fill operation, run unit tests to
verify fill behaviour is correct.
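
The core of each test iteration is a single fill plus a completion poll; a
minimal sketch of that sequence (helper name illustrative, busy-wait left
unbounded for brevity; the test itself waits on the vchan status and then
byte-checks the written data):

#include <rte_dmadev.h>
#include <rte_pause.h>

/* fill a buffer with a repeating 64-bit pattern and wait for completion */
static int
do_one_fill(int16_t dev_id, uint16_t vchan, rte_iova_t dst_iova, uint32_t len)
{
	const uint64_t pattern = 0xfedcba9876543210;

	if (rte_dma_fill(dev_id, vchan, pattern, dst_iova, len,
			RTE_DMA_OP_FLAG_SUBMIT) < 0)
		return -1;
	while (rte_dma_completed(dev_id, vchan, 1, NULL, NULL) != 1)
		rte_pause();
	return 0;
}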

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 49 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index d5885ba10e..9f555f456d 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -624,7 +624,51 @@ test_completion_handling(int dev_id, uint16_t vchan)
 {
 	return test_completion_status(dev_id, vchan, false)              /* without fences */
 			|| test_completion_status(dev_id, vchan, true);  /* with fences */
+}
+
+static int
+test_enqueue_fill(int dev_id, uint16_t vchan)
+{
+	const unsigned int lengths[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst;
+	char *dst_data;
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	dst = rte_pktmbuf_alloc(pool);
+	if (dst == NULL)
+		ERR_RETURN("Failed to allocate mbuf\n");
+	dst_data = rte_pktmbuf_mtod(dst, char *);
+
+	for (i = 0; i < RTE_DIM(lengths); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, rte_pktmbuf_data_len(dst));
+
+		/* perform the fill operation */
+		int id = rte_dma_fill(dev_id, vchan, pattern,
+				rte_pktmbuf_iova(dst), lengths[i], RTE_DMA_OP_FLAG_SUBMIT);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_fill\n");
+		await_hw(dev_id, vchan);
+
+		if (rte_dma_completed(dev_id, vchan, 1, NULL, NULL) != 1)
+			ERR_RETURN("Error: fill operation failed (length: %u)\n", lengths[i]);
+		/* check the data from the fill operation is correct */
+		for (j = 0; j < lengths[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte)
+				ERR_RETURN("Error with fill operation (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], pat_byte);
+		}
+		/* check that the data after the fill operation was not written to */
+		for (; j < rte_pktmbuf_data_len(dst); j++)
+			if (dst_data[j] != 0)
+				ERR_RETURN("Error, fill operation wrote too far (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], 0);
+	}
 
+	rte_pktmbuf_free(dst);
+	return 0;
 }
 
 static int
@@ -697,6 +741,11 @@ test_dmadev_instance(uint16_t dev_id)
 			dev_id, vchan, !CHECK_ERRS) < 0)
 		goto err;
 
+	if ((info.dev_capa & RTE_DMA_CAPA_OPS_FILL) == 0)
+		printf("DMA Dev %u: No device fill support, skipping fill tests\n", dev_id);
+	else if (runtest("fill", test_enqueue_fill, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v6 13/13] app/test: add dmadev burst capacity API test
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
                     ` (11 preceding siblings ...)
  2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 12/13] app/test: add dmadev fill tests Bruce Richardson
@ 2021-09-24 10:31   ` Bruce Richardson
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-09-24 10:31 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Add a test case to validate the functionality of drivers' burst capacity
API implementations.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 67 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 9f555f456d..7a320565e3 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -671,6 +671,69 @@ test_enqueue_fill(int dev_id, uint16_t vchan)
 	return 0;
 }
 
+static int
+test_burst_capacity(int dev_id, uint16_t vchan)
+{
+#define CAP_TEST_BURST_SIZE	64
+	const int ring_space = rte_dma_burst_capacity(dev_id, vchan);
+	struct rte_mbuf *src, *dst;
+	int i, j, iter;
+	int cap, ret;
+	bool dma_err;
+
+	src = rte_pktmbuf_alloc(pool);
+	dst = rte_pktmbuf_alloc(pool);
+
+	/* to test capacity, we enqueue elements and check capacity is reduced
+	 * by one each time - rebaselining the expected value after each burst
+	 * as the capacity is only for a burst. We enqueue multiple bursts to
+	 * fill up half the ring, before emptying it again. We do this twice to
+	 * ensure that we get to test scenarios where we get ring wrap-around
+	 */
+	for (iter = 0; iter < 2; iter++) {
+		for (i = 0; i < (ring_space / (2 * CAP_TEST_BURST_SIZE)) + 1; i++) {
+			cap = rte_dma_burst_capacity(dev_id, vchan);
+
+			for (j = 0; j < CAP_TEST_BURST_SIZE; j++) {
+				ret = rte_dma_copy(dev_id, vchan, rte_pktmbuf_iova(src),
+						rte_pktmbuf_iova(dst), COPY_LEN, 0);
+				if (ret < 0)
+					ERR_RETURN("Error with rte_dmadev_copy\n");
+
+				if (rte_dma_burst_capacity(dev_id, vchan) != cap - (j + 1))
+					ERR_RETURN("Error, ring capacity did not change as expected\n");
+			}
+			if (rte_dma_submit(dev_id, vchan) < 0)
+				ERR_RETURN("Error, failed to submit burst\n");
+
+			if (cap < rte_dma_burst_capacity(dev_id, vchan))
+				ERR_RETURN("Error, avail ring capacity has gone up, not down\n");
+		}
+		await_hw(dev_id, vchan);
+
+		for (i = 0; i < (ring_space / (2 * CAP_TEST_BURST_SIZE)) + 1; i++) {
+			ret = rte_dma_completed(dev_id, vchan,
+					CAP_TEST_BURST_SIZE, NULL, &dma_err);
+			if (ret != CAP_TEST_BURST_SIZE || dma_err) {
+				enum rte_dma_status_code status;
+
+				rte_dma_completed_status(dev_id, vchan, 1, NULL, &status);
+				ERR_RETURN("Error with rte_dmadev_completed, %u [expected: %u], dma_err = %d, i = %u, iter = %u, status = %u\n",
+						ret, CAP_TEST_BURST_SIZE, dma_err, i, iter, status);
+			}
+		}
+		cap = rte_dma_burst_capacity(dev_id, vchan);
+		if (cap != ring_space)
+			ERR_RETURN("Error, ring capacity has not reset to original value, got %u, expected %u\n",
+					cap, ring_space);
+	}
+
+	rte_pktmbuf_free(src);
+	rte_pktmbuf_free(dst);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(uint16_t dev_id)
 {
@@ -728,6 +791,10 @@ test_dmadev_instance(uint16_t dev_id)
 	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
 		goto err;
 
+	/* run some burst capacity tests */
+	if (runtest("burst capacity", test_burst_capacity, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	/* to test error handling we can provide null pointers for source or dest in copies. This
 	 * requires VA mode in DPDK, since NULL(0) is a valid physical address.
 	 * We also need hardware that can report errors back.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v6 04/13] dma/skeleton: add burst capacity function
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 04/13] dma/skeleton: add burst capacity function Bruce Richardson
@ 2021-09-24 14:51     ` Conor Walsh
  0 siblings, 0 replies; 130+ messages in thread
From: Conor Walsh @ 2021-09-24 14:51 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: kevin.laatz, fengchengwen, jerinj

On 24/09/2021 11:29, Bruce Richardson wrote:
> Implement function to return the remaining space for operations.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---

<snip>

Reviewed-by: Conor Walsh <conor.walsh@intel.com>


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v6 05/13] dmadev: add device iterator
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 05/13] dmadev: add device iterator Bruce Richardson
@ 2021-09-24 14:52     ` Conor Walsh
  2021-09-24 15:58     ` Kevin Laatz
  1 sibling, 0 replies; 130+ messages in thread
From: Conor Walsh @ 2021-09-24 14:52 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: kevin.laatz, fengchengwen, jerinj

On 24/09/2021 11:29, Bruce Richardson wrote:
> Add a function and wrapper macro to iterate over all dma devices.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---

<snip>

Reviewed-by: Conor Walsh <conor.walsh@intel.com>


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v6 10/13] dmadev: add flag for error handling support
  2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 10/13] dmadev: add flag for error handling support Bruce Richardson
@ 2021-09-24 14:52     ` Conor Walsh
  2021-09-24 15:58     ` Kevin Laatz
  1 sibling, 0 replies; 130+ messages in thread
From: Conor Walsh @ 2021-09-24 14:52 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: kevin.laatz, fengchengwen, jerinj

On 24/09/2021 11:31, Bruce Richardson wrote:
> Due to HW or driver limitations, not all dmadevs may support full error
> handling e.g. safely managing and reporting an invalid address to a copy
> operation. The skeleton dmadev, for example, being pure software will
> always seg-fault if passed an invalid address. To indicate the
> availability of safe error handling by a device, we add a capability
> flag for it.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---

<snip>

This is very useful for the IOAT driver too, thanks!

Reviewed-by: Conor Walsh <conor.walsh@intel.com>


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v6 10/13] dmadev: add flag for error handling support
  2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 10/13] dmadev: add flag for error handling support Bruce Richardson
  2021-09-24 14:52     ` Conor Walsh
@ 2021-09-24 15:58     ` Kevin Laatz
  1 sibling, 0 replies; 130+ messages in thread
From: Kevin Laatz @ 2021-09-24 15:58 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, fengchengwen, jerinj

On 24/09/2021 11:31, Bruce Richardson wrote:
> Due to HW or driver limitations, not all dmadevs may support full error
> handling e.g. safely managing and reporting an invalid address to a copy
> operation. The skeleton dmadev, for example, being pure software will
> always seg-fault if passed an invalid address. To indicate the
> availability of safe error handling by a device, we add a capability
> flag for it.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/dmadev/rte_dmadev.c | 1 +
>   lib/dmadev/rte_dmadev.h | 9 +++++++++
>   2 files changed, 10 insertions(+)

Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>



^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v6 08/13] app/test: run test suite on skeleton driver
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 08/13] app/test: run test suite on skeleton driver Bruce Richardson
@ 2021-09-24 15:58     ` Kevin Laatz
  0 siblings, 0 replies; 130+ messages in thread
From: Kevin Laatz @ 2021-09-24 15:58 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, fengchengwen, jerinj

On 24/09/2021 11:29, Bruce Richardson wrote:
> When running the dmadev_autotest, run the suite of copy tests on the
> skeleton driver created for API testing too, rather than just destroying
> the driver instances once the API tests are complete. This helps to
> sanity check the tests themselves are reasonable.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   app/test/test_dmadev.c | 5 ++---
>   1 file changed, 2 insertions(+), 3 deletions(-)
>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>



^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v6 05/13] dmadev: add device iterator
  2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 05/13] dmadev: add device iterator Bruce Richardson
  2021-09-24 14:52     ` Conor Walsh
@ 2021-09-24 15:58     ` Kevin Laatz
  1 sibling, 0 replies; 130+ messages in thread
From: Kevin Laatz @ 2021-09-24 15:58 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: conor.walsh, fengchengwen, jerinj

On 24/09/2021 11:29, Bruce Richardson wrote:
> Add a function and wrapper macro to iterate over all dma devices.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/dmadev/rte_dmadev.c | 13 +++++++++++++
>   lib/dmadev/rte_dmadev.h | 18 ++++++++++++++++++
>   lib/dmadev/version.map  |  1 +
>   3 files changed, 32 insertions(+)
>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>



^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers
  2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
                     ` (12 preceding siblings ...)
  2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 13/13] app/test: add dmadev burst capacity API test Bruce Richardson
@ 2021-10-13 15:17   ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 01/13] dmadev: add channel status check for testing use Bruce Richardson
                       ` (13 more replies)
  13 siblings, 14 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

This patchset adds a fairly comprehensive set of tests for basic dmadev
functionality. Tests are added to verify basic copy operation in each
device, using both submit function and submit flag, and verifying
completion gathering using both "completed()" and "completed_status()"
functions. Beyond that, tests are then added for the error reporting and
handling, as is a suite of tests for the fill() operation for devices that
support those.

New in version 6 of the series are a couple of new patches to add support
for the skeleton dmadev to pass the suite of tests. This includes adding
missing function support to it, as well as adding a capability flag
to dmadev to indicate that it can't support invalid addresses.

Depends-on: series-19594 ("support dmadev")

V7:
* rebased to dmadev v26 patchset

V6:
* changed type of dev_id from uint16_t to int16_t
* moved burst_capacity function to datapath function set
* added burst_capacity function and vchan_status functions to skeleton driver
* added capability flag to indicate if device supports handling errors
* enabled running unit tests on skeleton driver

V5:
* added missing reviewed-by tags from v3 reviewed.

V4:
* rebased to v22 of dmadev set
* added patch for iteration macro for dmadevs to allow testing each dmadev in
  turn

V3:
* add patch and tests for a burst-capacity function
* addressed review feedback from v2
* code cleanups to try and shorten code where possible

V2:
* added into dmadev a API to check for a device being idle
* removed the hard-coded timeout delays before checking completions, and instead
  wait for device to be idle
* added in checks for statistics updates as part of some tests
* fixed issue identified by internal coverity scan
* other minor miscellaneous changes and fixes.

Bruce Richardson (10):
  dmadev: add channel status check for testing use
  dma/skeleton: add channel status function
  dma/skeleton: add burst capacity function
  dmadev: add device iterator
  app/test: add basic dmadev instance tests
  app/test: add basic dmadev copy tests
  app/test: run test suite on skeleton driver
  app/test: add more comprehensive dmadev copy tests
  dmadev: add flag for error handling support
  app/test: test dmadev instance failure handling

Kevin Laatz (3):
  dmadev: add burst capacity API
  app/test: add dmadev fill tests
  app/test: add dmadev burst capacity API test

 app/test/test_dmadev.c                 | 830 ++++++++++++++++++++++++-
 drivers/dma/skeleton/skeleton_dmadev.c |  28 +-
 drivers/dma/skeleton/skeleton_dmadev.h |   2 +-
 lib/dmadev/rte_dmadev.c                |  40 ++
 lib/dmadev/rte_dmadev.h                |  89 +++
 lib/dmadev/rte_dmadev_core.h           |   4 +
 lib/dmadev/rte_dmadev_pmd.h            |   5 +
 lib/dmadev/version.map                 |   3 +
 8 files changed, 995 insertions(+), 6 deletions(-)

--
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 01/13] dmadev: add channel status check for testing use
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 02/13] dma/skeleton: add channel status function Bruce Richardson
                       ` (12 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add in a function to check if a device or vchan has completed all jobs
assigned to it, without gathering the results. This is primarily for
use in testing, to allow the hardware to be in a known state prior to
gathering completions.
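
A typical caller polls the new API until the channel reports idle, with a
timeout as a fallback; a minimal sketch, mirroring the await_hw() helper
added to the tests later in this series:

#include <rte_dmadev.h>
#include <rte_cycles.h>
#include <rte_pause.h>

/* spin until the vchan is idle, giving up after roughly one second */
static void
wait_for_idle(int16_t dev_id, uint16_t vchan)
{
	enum rte_dma_vchan_status st;
	const uint64_t deadline = rte_get_timer_cycles() + rte_get_timer_hz();

	if (rte_dma_vchan_status(dev_id, vchan, &st) < 0)
		return; /* driver does not implement the op */
	while (st == RTE_DMA_VCHAN_ACTIVE && rte_get_timer_cycles() < deadline) {
		rte_pause();
		rte_dma_vchan_status(dev_id, vchan, &st);
	}
}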

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 lib/dmadev/rte_dmadev.c     | 17 +++++++++++++++++
 lib/dmadev/rte_dmadev.h     | 34 ++++++++++++++++++++++++++++++++++
 lib/dmadev/rte_dmadev_pmd.h |  5 +++++
 lib/dmadev/version.map      |  1 +
 4 files changed, 57 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 7099bbb28d..3f9154e619 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -679,6 +679,23 @@ rte_dma_stats_reset(int16_t dev_id, uint16_t vchan)
 	return (*dev->dev_ops->stats_reset)(dev, vchan);
 }
 
+int
+rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status)
+{
+	struct rte_dma_dev *dev = &rte_dma_devices[dev_id];
+
+	if (!rte_dma_is_valid(dev_id))
+		return -EINVAL;
+
+	if (vchan >= dev->data->dev_conf.nb_vchans) {
+		RTE_DMA_LOG(ERR, "Device %u vchan %u out of range\n", dev_id, vchan);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vchan_status, -ENOTSUP);
+	return (*dev->dev_ops->vchan_status)(dev, vchan, status);
+}
+
 static const char *
 dma_capability_name(uint64_t capability)
 {
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index e46c001404..e35aca7d1c 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -645,6 +645,40 @@ int rte_dma_stats_get(int16_t dev_id, uint16_t vchan,
 __rte_experimental
 int rte_dma_stats_reset(int16_t dev_id, uint16_t vchan);
 
+/**
+ * device vchannel status
+ *
+ * Enum with the options for the channel status, either idle, active or halted due to error
+ * @see rte_dma_vchan_status
+ */
+enum rte_dma_vchan_status {
+	RTE_DMA_VCHAN_IDLE,          /**< not processing, awaiting ops */
+	RTE_DMA_VCHAN_ACTIVE,        /**< currently processing jobs */
+	RTE_DMA_VCHAN_HALTED_ERROR,  /**< not processing due to error, cannot accept new ops */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Determine if all jobs have completed on a device channel.
+ * This function is primarily designed for testing use, as it allows a process to check if
+ * all jobs are completed, without actually gathering completions from those jobs.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ * @param[out] status
+ *   The vchan status
+ * @return
+ *   0 - call completed successfully
+ *   < 0 - error code indicating there was a problem calling the API
+ */
+__rte_experimental
+int
+rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
diff --git a/lib/dmadev/rte_dmadev_pmd.h b/lib/dmadev/rte_dmadev_pmd.h
index 23b07a4e1c..b97b5bf10b 100644
--- a/lib/dmadev/rte_dmadev_pmd.h
+++ b/lib/dmadev/rte_dmadev_pmd.h
@@ -54,6 +54,10 @@ typedef int (*rte_dma_stats_get_t)(const struct rte_dma_dev *dev,
 /** @internal Used to reset basic statistics. */
 typedef int (*rte_dma_stats_reset_t)(struct rte_dma_dev *dev, uint16_t vchan);
 
+/** @internal Used to check if a virtual channel has finished all jobs. */
+typedef int (*rte_dma_vchan_status_t)(const struct rte_dma_dev *dev, uint16_t vchan,
+		enum rte_dma_vchan_status *status);
+
 /** @internal Used to dump internal information. */
 typedef int (*rte_dma_dump_t)(const struct rte_dma_dev *dev, FILE *f);
 
@@ -74,6 +78,7 @@ struct rte_dma_dev_ops {
 	rte_dma_stats_get_t        stats_get;
 	rte_dma_stats_reset_t      stats_reset;
 
+	rte_dma_vchan_status_t     vchan_status;
 	rte_dma_dump_t             dev_dump;
 };
 
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index e17207b212..8785e14648 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -20,6 +20,7 @@ EXPERIMENTAL {
 	rte_dma_stop;
 	rte_dma_submit;
 	rte_dma_vchan_setup;
+	rte_dma_vchan_status;
 
 	local: *;
 };
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 02/13] dma/skeleton: add channel status function
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 01/13] dmadev: add channel status check for testing use Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 03/13] dmadev: add burst capacity API Bruce Richardson
                       ` (11 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

In order to avoid timing errors with the unit tests, we need to ensure
we have the vchan_status function to report when a channel is idle.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/dma/skeleton/skeleton_dmadev.c | 18 +++++++++++++++++-
 drivers/dma/skeleton/skeleton_dmadev.h |  2 +-
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c
index 22a73c6178..dd2f1c9b57 100644
--- a/drivers/dma/skeleton/skeleton_dmadev.c
+++ b/drivers/dma/skeleton/skeleton_dmadev.c
@@ -79,7 +79,7 @@ cpucopy_thread(void *param)
 
 		hw->zero_req_count = 0;
 		rte_memcpy(desc->dst, desc->src, desc->len);
-		hw->completed_count++;
+		__atomic_add_fetch(&hw->completed_count, 1, __ATOMIC_RELEASE);
 		(void)rte_ring_enqueue(hw->desc_completed, (void *)desc);
 	}
 
@@ -257,6 +257,21 @@ skeldma_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
 	return vchan_setup(hw, conf->nb_desc);
 }
 
+static int
+skeldma_vchan_status(const struct rte_dma_dev *dev,
+		uint16_t vchan, enum rte_dma_vchan_status *status)
+{
+	struct skeldma_hw *hw = dev->data->dev_private;
+
+	RTE_SET_USED(vchan);
+
+	*status = RTE_DMA_VCHAN_IDLE;
+	if (hw->submitted_count != __atomic_load_n(&hw->completed_count, __ATOMIC_ACQUIRE)
+			|| hw->zero_req_count == 0)
+		*status = RTE_DMA_VCHAN_ACTIVE;
+	return 0;
+}
+
 static int
 skeldma_stats_get(const struct rte_dma_dev *dev, uint16_t vchan,
 		  struct rte_dma_stats *stats, uint32_t stats_sz)
@@ -424,6 +439,7 @@ static const struct rte_dma_dev_ops skeldma_ops = {
 	.dev_close        = skeldma_close,
 
 	.vchan_setup      = skeldma_vchan_setup,
+	.vchan_status     = skeldma_vchan_status,
 
 	.stats_get        = skeldma_stats_get,
 	.stats_reset      = skeldma_stats_reset,
diff --git a/drivers/dma/skeleton/skeleton_dmadev.h b/drivers/dma/skeleton/skeleton_dmadev.h
index eaa52364bf..91eb5460fc 100644
--- a/drivers/dma/skeleton/skeleton_dmadev.h
+++ b/drivers/dma/skeleton/skeleton_dmadev.h
@@ -54,7 +54,7 @@ struct skeldma_hw {
 
 	/* Cache delimiter for cpucopy thread's operation data */
 	char cache2 __rte_cache_aligned;
-	uint32_t zero_req_count;
+	volatile uint32_t zero_req_count;
 	uint64_t completed_count;
 };
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 03/13] dmadev: add burst capacity API
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 01/13] dmadev: add channel status check for testing use Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 02/13] dma/skeleton: add channel status function Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 04/13] dma/skeleton: add burst capacity function Bruce Richardson
                       ` (10 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj

From: Kevin Laatz <kevin.laatz@intel.com>

Add a burst capacity check API to the dmadev library. This API is useful to
applications which need to know how many descriptors can be enqueued in the
current batch. For example, it could be used to determine whether all
segments of a multi-segment packet can be enqueued in the same batch or not
(to avoid half-offload of the packet).
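
For example, an application offloading a chained mbuf could check the
remaining capacity up front and fall back to a CPU copy if the whole chain
does not fit; a minimal sketch (helper name illustrative, assuming one copy
operation per segment so nb_segs gives the descriptor count needed):

#include <stdbool.h>
#include <rte_dmadev.h>
#include <rte_mbuf.h>

/* true if every segment of the packet can be enqueued in this burst */
static bool
pkt_fits_in_burst(int16_t dev_id, uint16_t vchan, const struct rte_mbuf *m)
{
	return rte_dma_burst_capacity(dev_id, vchan) >= m->nb_segs;
}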

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 lib/dmadev/rte_dmadev.c      |  9 +++++++++
 lib/dmadev/rte_dmadev.h      | 29 +++++++++++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h |  4 ++++
 lib/dmadev/version.map       |  1 +
 4 files changed, 43 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 3f9154e619..c737cc6cdc 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -830,6 +830,14 @@ dummy_completed_status(__rte_unused void *dev_private,
 	return 0;
 }
 
+static uint16_t
+dummy_burst_capacity(__rte_unused const void *dev_private,
+		     __rte_unused uint16_t vchan)
+{
+	RTE_DMA_LOG(ERR, "burst_capacity is not configured or not supported.");
+	return 0;
+}
+
 static void
 dma_fp_object_dummy(struct rte_dma_fp_object *obj)
 {
@@ -840,4 +848,5 @@ dma_fp_object_dummy(struct rte_dma_fp_object *obj)
 	obj->submit           = dummy_submit;
 	obj->completed        = dummy_completed;
 	obj->completed_status = dummy_completed_status;
+	obj->burst_capacity   = dummy_burst_capacity;
 }
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index e35aca7d1c..a0824be20d 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -1076,6 +1076,35 @@ rte_dma_completed_status(int16_t dev_id, uint16_t vchan,
 					last_idx, status);
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Check remaining capacity in descriptor ring for the current burst.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ *
+ * @return
+ *   - Remaining space in the descriptor ring for the current burst.
+ *   - 0 on error
+ */
+__rte_experimental
+static inline uint16_t
+rte_dma_burst_capacity(int16_t dev_id, uint16_t vchan)
+{
+	struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id];
+
+#ifdef RTE_DMADEV_DEBUG
+	if (!rte_dma_is_valid(dev_id))
+		return 0;
+	RTE_FUNC_PTR_OR_ERR_RET(*obj->burst_capacity, 0);
+#endif
+	return (*obj->burst_capacity)(obj->dev_private, vchan);
+}
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index 236d9d38e5..e42d8739ab 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -47,6 +47,9 @@ typedef uint16_t (*rte_dma_completed_status_t)(void *dev_private,
 			uint16_t vchan, const uint16_t nb_cpls,
 			uint16_t *last_idx, enum rte_dma_status_code *status);
 
+/** @internal Used to check the remaining space in descriptor ring. */
+typedef uint16_t (*rte_dma_burst_capacity_t)(const void *dev_private, uint16_t vchan);
+
 /**
  * @internal
  * Fast-path dmadev functions and related data are hold in a flat array.
@@ -69,6 +72,7 @@ struct rte_dma_fp_object {
 	rte_dma_submit_t           submit;
 	rte_dma_completed_t        completed;
 	rte_dma_completed_status_t completed_status;
+	rte_dma_burst_capacity_t   burst_capacity;
 } __rte_aligned(128);
 
 extern struct rte_dma_fp_object *rte_dma_fp_objs;
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 8785e14648..4bbfdd52f6 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -1,6 +1,7 @@
 EXPERIMENTAL {
 	global:
 
+	rte_dma_burst_capacity;
 	rte_dma_close;
 	rte_dma_completed;
 	rte_dma_completed_status;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 04/13] dma/skeleton: add burst capacity function
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
                       ` (2 preceding siblings ...)
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 03/13] dmadev: add burst capacity API Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 05/13] dmadev: add device iterator Bruce Richardson
                       ` (9 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Implement function to return the remaining space for operations.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 drivers/dma/skeleton/skeleton_dmadev.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c
index dd2f1c9b57..2952417126 100644
--- a/drivers/dma/skeleton/skeleton_dmadev.c
+++ b/drivers/dma/skeleton/skeleton_dmadev.c
@@ -431,6 +431,15 @@ skeldma_completed_status(void *dev_private,
 	return count;
 }
 
+static uint16_t
+skeldma_burst_capacity(const void *dev_private, uint16_t vchan)
+{
+	const struct skeldma_hw *hw = dev_private;
+
+	RTE_SET_USED(vchan);
+	return rte_ring_count(hw->desc_empty);
+}
+
 static const struct rte_dma_dev_ops skeldma_ops = {
 	.dev_info_get     = skeldma_info_get,
 	.dev_configure    = skeldma_configure,
@@ -469,6 +478,7 @@ skeldma_create(const char *name, struct rte_vdev_device *vdev, int lcore_id)
 	dev->fp_obj->submit = skeldma_submit;
 	dev->fp_obj->completed = skeldma_completed;
 	dev->fp_obj->completed_status = skeldma_completed_status;
+	dev->fp_obj->burst_capacity = skeldma_burst_capacity;
 
 	hw = dev->data->dev_private;
 	hw->lcore_id = lcore_id;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 05/13] dmadev: add device iterator
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
                       ` (3 preceding siblings ...)
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 04/13] dma/skeleton: add burst capacity function Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 06/13] app/test: add basic dmadev instance tests Bruce Richardson
                       ` (8 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a function and wrapper macro to iterate over all dma devices.
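
Typical usage is a simple loop over all probed devices; a minimal sketch
(helper name illustrative, assuming the dev_name field of rte_dma_info for
the printout):

#include <stdio.h>
#include <rte_dmadev.h>

/* print the id and name of every probed dmadev */
static void
list_dmadevs(void)
{
	int16_t dev_id;

	RTE_DMA_FOREACH_DEV(dev_id) {
		struct rte_dma_info info;

		if (rte_dma_info_get(dev_id, &info) == 0)
			printf("dmadev %d: %s\n", dev_id, info.dev_name);
	}
}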

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 lib/dmadev/rte_dmadev.c | 13 +++++++++++++
 lib/dmadev/rte_dmadev.h | 18 ++++++++++++++++++
 lib/dmadev/version.map  |  1 +
 3 files changed, 32 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index c737cc6cdc..b6647e6ff8 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -49,6 +49,19 @@ rte_dma_dev_max(size_t dev_max)
 	return 0;
 }
 
+int16_t
+rte_dma_next_dev(int16_t start_dev_id)
+{
+	int16_t dev_id = start_dev_id;
+	while (dev_id < dma_devices_max && rte_dma_devices[dev_id].state == RTE_DMA_DEV_UNUSED)
+		dev_id++;
+
+	if (dev_id < dma_devices_max)
+		return dev_id;
+
+	return -1;
+}
+
 static int
 dma_check_name(const char *name)
 {
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index a0824be20d..bf78748b0c 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -220,6 +220,24 @@ bool rte_dma_is_valid(int16_t dev_id);
 __rte_experimental
 uint16_t rte_dma_count_avail(void);
 
+/**
+ * Iterates over valid dmadev instances.
+ *
+ * @param start_dev_id
+ *   The id of the next possible dmadev.
+ * @return
+ *   Next valid dmadev, -1 if there is none.
+ */
+__rte_experimental
+int16_t rte_dma_next_dev(int16_t start_dev_id);
+
+/** Utility macro to iterate over all available dmadevs */
+#define RTE_DMA_FOREACH_DEV(p) \
+	for (p = rte_dma_next_dev(0); \
+	     p != -1; \
+	     p = rte_dma_next_dev(p + 1))
+
+
 /**@{@name DMA capability
  * @see struct rte_dma_info::dev_capa
  */
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 4bbfdd52f6..ef561acd46 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -15,6 +15,7 @@ EXPERIMENTAL {
 	rte_dma_get_dev_id_by_name;
 	rte_dma_info_get;
 	rte_dma_is_valid;
+	rte_dma_next_dev;
 	rte_dma_start;
 	rte_dma_stats_get;
 	rte_dma_stats_reset;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 06/13] app/test: add basic dmadev instance tests
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
                       ` (4 preceding siblings ...)
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 05/13] dmadev: add device iterator Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 07/13] app/test: add basic dmadev copy tests Bruce Richardson
                       ` (7 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Run basic sanity tests for configuring, starting and stopping a dmadev
instance to help validate drivers. This also provides the framework for
future tests for data-path operation.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 app/test/test_dmadev.c | 74 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 73 insertions(+), 1 deletion(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 45da6b76fe..e974936f25 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -3,12 +3,75 @@
  * Copyright(c) 2021 Intel Corporation
  */
 
+#include <inttypes.h>
+
 #include <rte_dmadev.h>
 #include <rte_bus_vdev.h>
+#include <rte_dmadev_pmd.h>
 
 #include "test.h"
 #include "test_dmadev_api.h"
 
+#define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); return -1; } while (0)
+
+static void
+__rte_format_printf(3, 4)
+print_err(const char *func, int lineno, const char *format, ...)
+{
+	va_list ap;
+
+	fprintf(stderr, "In %s:%d - ", func, lineno);
+	va_start(ap, format);
+	vfprintf(stderr, format, ap);
+	va_end(ap);
+}
+
+static int
+test_dmadev_instance(int16_t dev_id)
+{
+#define TEST_RINGSIZE 512
+	struct rte_dma_stats stats;
+	struct rte_dma_info info;
+	const struct rte_dma_conf conf = { .nb_vchans = 1};
+	const struct rte_dma_vchan_conf qconf = {
+			.direction = RTE_DMA_DIR_MEM_TO_MEM,
+			.nb_desc = TEST_RINGSIZE,
+	};
+	const int vchan = 0;
+
+	printf("\n### Test dmadev instance %u [%s]\n",
+			dev_id, rte_dma_devices[dev_id].data->dev_name);
+
+	rte_dma_info_get(dev_id, &info);
+	if (info.max_vchans < 1)
+		ERR_RETURN("Error, no channels available on device id %u\n", dev_id);
+
+	if (rte_dma_configure(dev_id, &conf) != 0)
+		ERR_RETURN("Error with rte_dma_configure()\n");
+
+	if (rte_dma_vchan_setup(dev_id, vchan, &qconf) < 0)
+		ERR_RETURN("Error with queue configuration\n");
+
+	rte_dma_info_get(dev_id, &info);
+	if (info.nb_vchans != 1)
+		ERR_RETURN("Error, no configured queues reported on device id %u\n", dev_id);
+
+	if (rte_dma_start(dev_id) != 0)
+		ERR_RETURN("Error with rte_dma_start()\n");
+
+	if (rte_dma_stats_get(dev_id, vchan, &stats) != 0)
+		ERR_RETURN("Error with rte_dma_stats_get()\n");
+
+	if (stats.completed != 0 || stats.submitted != 0 || stats.errors != 0)
+		ERR_RETURN("Error device stats are not all zero: completed = %"PRIu64", "
+				"submitted = %"PRIu64", errors = %"PRIu64"\n",
+				stats.completed, stats.submitted, stats.errors);
+
+	rte_dma_stop(dev_id);
+	rte_dma_stats_reset(dev_id, vchan);
+	return 0;
+}
+
 static int
 test_apis(void)
 {
@@ -31,9 +94,18 @@ test_apis(void)
 static int
 test_dma(void)
 {
+	int i;
+
 	/* basic sanity on dmadev infrastructure */
 	if (test_apis() < 0)
-		return -1;
+		ERR_RETURN("Error performing API tests\n");
+
+	if (rte_dma_count_avail() == 0)
+		return TEST_SKIPPED;
+
+	RTE_DMA_FOREACH_DEV(i)
+		if (test_dmadev_instance(i) < 0)
+			ERR_RETURN("Error, test failure for device %d\n", i);
 
 	return 0;
 }
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 07/13] app/test: add basic dmadev copy tests
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
                       ` (5 preceding siblings ...)
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 06/13] app/test: add basic dmadev instance tests Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 08/13] app/test: run test suite on skeleton driver Bruce Richardson
                       ` (6 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

For each dmadev instance, perform some basic copy tests to validate that
functionality.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 175 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 175 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index e974936f25..f4537a87c1 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -6,6 +6,10 @@
 #include <inttypes.h>
 
 #include <rte_dmadev.h>
+#include <rte_mbuf.h>
+#include <rte_pause.h>
+#include <rte_cycles.h>
+#include <rte_random.h>
 #include <rte_bus_vdev.h>
 #include <rte_dmadev_pmd.h>
 
@@ -14,6 +18,11 @@
 
 #define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); return -1; } while (0)
 
+#define COPY_LEN 1024
+
+static struct rte_mempool *pool;
+static uint16_t id_count;
+
 static void
 __rte_format_printf(3, 4)
 print_err(const char *func, int lineno, const char *format, ...)
@@ -26,10 +35,155 @@ print_err(const char *func, int lineno, const char *format, ...)
 	va_end(ap);
 }
 
+static int
+runtest(const char *printable, int (*test_fn)(int16_t dev_id, uint16_t vchan), int iterations,
+		int16_t dev_id, uint16_t vchan, bool check_err_stats)
+{
+	struct rte_dma_stats stats;
+	int i;
+
+	rte_dma_stats_reset(dev_id, vchan);
+	printf("DMA Dev %d: Running %s Tests %s\n", dev_id, printable,
+			check_err_stats ? " " : "(errors expected)");
+	for (i = 0; i < iterations; i++) {
+		if (test_fn(dev_id, vchan) < 0)
+			return -1;
+
+		rte_dma_stats_get(dev_id, 0, &stats);
+		printf("Ops submitted: %"PRIu64"\t", stats.submitted);
+		printf("Ops completed: %"PRIu64"\t", stats.completed);
+		printf("Errors: %"PRIu64"\r", stats.errors);
+
+		if (stats.completed != stats.submitted)
+			ERR_RETURN("\nError, not all submitted jobs are reported as completed\n");
+		if (check_err_stats && stats.errors != 0)
+			ERR_RETURN("\nErrors reported during op processing, aborting tests\n");
+	}
+	printf("\n");
+	return 0;
+}
+
+static void
+await_hw(int16_t dev_id, uint16_t vchan)
+{
+	enum rte_dma_vchan_status st;
+
+	if (rte_dma_vchan_status(dev_id, vchan, &st) < 0) {
+		/* for drivers that don't support this op, just sleep for 1 millisecond */
+		rte_delay_us_sleep(1000);
+		return;
+	}
+
+	/* for those that do, *max* end time is one second from now, but all should be faster */
+	const uint64_t end_cycles = rte_get_timer_cycles() + rte_get_timer_hz();
+	while (st == RTE_DMA_VCHAN_ACTIVE && rte_get_timer_cycles() < end_cycles) {
+		rte_pause();
+		rte_dma_vchan_status(dev_id, vchan, &st);
+	}
+}
+
+static int
+test_enqueue_copies(int16_t dev_id, uint16_t vchan)
+{
+	unsigned int i;
+	uint16_t id;
+
+	/* test doing a single copy */
+	do {
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		id = rte_dma_copy(dev_id, vchan, rte_pktmbuf_iova(src), rte_pktmbuf_iova(dst),
+				COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT);
+		if (id != id_count)
+			ERR_RETURN("Error with rte_dma_copy, got %u, expected %u\n",
+					id, id_count);
+
+		/* give time for copy to finish, then check it was done */
+		await_hw(dev_id, vchan);
+
+		for (i = 0; i < COPY_LEN; i++)
+			if (dst_data[i] != src_data[i])
+				ERR_RETURN("Data mismatch at char %u [Got %02x not %02x]\n", i,
+						dst_data[i], src_data[i]);
+
+		/* now check completion works */
+		if (rte_dma_completed(dev_id, vchan, 1, &id, NULL) != 1)
+			ERR_RETURN("Error with rte_dma_completed\n");
+
+		if (id != id_count)
+			ERR_RETURN("Error:incorrect job id received, %u [expected %u]\n",
+					id, id_count);
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+
+		/* now check completion returns nothing more */
+		if (rte_dma_completed(dev_id, 0, 1, NULL, NULL) != 0)
+			ERR_RETURN("Error with rte_dma_completed in empty check\n");
+
+		id_count++;
+
+	} while (0);
+
+	/* test doing multiple single copies */
+	do {
+		const uint16_t max_ops = 4;
+		struct rte_mbuf *src, *dst;
+		char *src_data, *dst_data;
+		uint16_t count;
+
+		src = rte_pktmbuf_alloc(pool);
+		dst = rte_pktmbuf_alloc(pool);
+		src_data = rte_pktmbuf_mtod(src, char *);
+		dst_data = rte_pktmbuf_mtod(dst, char *);
+
+		for (i = 0; i < COPY_LEN; i++)
+			src_data[i] = rte_rand() & 0xFF;
+
+		/* perform the same copy <max_ops> times */
+		for (i = 0; i < max_ops; i++)
+			if (rte_dma_copy(dev_id, vchan,
+					rte_pktmbuf_iova(src),
+					rte_pktmbuf_iova(dst),
+					COPY_LEN, RTE_DMA_OP_FLAG_SUBMIT) != id_count++)
+				ERR_RETURN("Error with rte_dma_copy\n");
+
+		await_hw(dev_id, vchan);
+
+		count = rte_dma_completed(dev_id, vchan, max_ops * 2, &id, NULL);
+		if (count != max_ops)
+			ERR_RETURN("Error with rte_dma_completed, got %u not %u\n",
+					count, max_ops);
+
+		if (id != id_count - 1)
+			ERR_RETURN("Error, incorrect job id returned: got %u not %u\n",
+					id, id_count - 1);
+
+		for (i = 0; i < COPY_LEN; i++)
+			if (dst_data[i] != src_data[i])
+				ERR_RETURN("Data mismatch at char %u\n", i);
+
+		rte_pktmbuf_free(src);
+		rte_pktmbuf_free(dst);
+	} while (0);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(int16_t dev_id)
 {
 #define TEST_RINGSIZE 512
+#define CHECK_ERRS    true
 	struct rte_dma_stats stats;
 	struct rte_dma_info info;
 	const struct rte_dma_conf conf = { .nb_vchans = 1};
@@ -66,10 +220,31 @@ test_dmadev_instance(int16_t dev_id)
 		ERR_RETURN("Error device stats are not all zero: completed = %"PRIu64", "
 				"submitted = %"PRIu64", errors = %"PRIu64"\n",
 				stats.completed, stats.submitted, stats.errors);
+	id_count = 0;
+
+	/* create a mempool for running tests */
+	pool = rte_pktmbuf_pool_create("TEST_DMADEV_POOL",
+			TEST_RINGSIZE * 2, /* n == num elements */
+			32,  /* cache size */
+			0,   /* priv size */
+			2048, /* data room size */
+			info.numa_node);
+	if (pool == NULL)
+		ERR_RETURN("Error with mempool creation\n");
 
+	/* run the test cases, use many iterations to ensure UINT16_MAX id wraparound */
+	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
+	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
 	return 0;
+
+err:
+	rte_mempool_free(pool);
+	rte_dma_stop(dev_id);
+	return -1;
 }
 
 static int
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 08/13] app/test: run test suite on skeleton driver
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
                       ` (6 preceding siblings ...)
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 07/13] app/test: add basic dmadev copy tests Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 09/13] app/test: add more comprehensive dmadev copy tests Bruce Richardson
                       ` (5 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

When running the dmadev_autotest, run the suite of copy tests on the
skeleton driver created for API testing too, rather than just destroying
the driver instances once the API tests are complete. This helps to
sanity-check that the tests themselves are reasonable.
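
The mechanics are small: the skeleton instance is created (or reused if
already present) via the vdev API and then looked up by name. A rough
sketch, assuming the skeleton driver registers the vdev name
"dma_skeleton" and that rte_bus_vdev.h and rte_dmadev.h are included:

    /* sketch: create or reuse the skeleton dmadev and return its id */
    static int
    sketch_get_skeleton_dev(void)
    {
        const char *pmd = "dma_skeleton";

        /* ignore failure here: the instance may already exist */
        rte_vdev_init(pmd, NULL);

        /* a negative return means no such device is registered */
        return rte_dma_get_dev_id_by_name(pmd);
    }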

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 app/test/test_dmadev.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index f4537a87c1..b0fba1d84e 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -254,14 +254,13 @@ test_apis(void)
 	int id;
 	int ret;
 
-	if (rte_vdev_init(pmd, NULL) < 0)
-		return TEST_SKIPPED;
+	/* attempt to create skeleton instance - ignore errors due to one being already present */
+	rte_vdev_init(pmd, NULL);
 	id = rte_dma_get_dev_id_by_name(pmd);
 	if (id < 0)
 		return TEST_SKIPPED;
 	printf("\n### Test dmadev infrastructure using skeleton driver\n");
 	ret = test_dma_api(id);
-	rte_vdev_uninit(pmd);
 
 	return ret;
 }
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 09/13] app/test: add more comprehensive dmadev copy tests
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
                       ` (7 preceding siblings ...)
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 08/13] app/test: run test suite on skeleton driver Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 10/13] dmadev: add flag for error handling support Bruce Richardson
                       ` (4 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add unit tests for various combinations of dmadev use, copying
bursts of packets in several different ways, e.g.

1. enqueuing two smaller bursts and completing them as one burst
2. enqueuing one burst and gathering completions in smaller bursts
3. using completed_status() function to gather completions rather than
   just completed()
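
A condensed sketch of the two completion-gathering styles compared above;
the helper name and the 64-job cap are assumptions for illustration, not
taken from the patch:

    /* sketch: drain 'n' completed jobs (n <= 64), either in two halves
     * via completed() or in one call via completed_status()
     */
    static int
    sketch_gather(int16_t dev_id, uint16_t vchan, uint16_t n, bool use_status)
    {
        enum rte_dma_status_code status[64];
        bool has_error = false;
        uint16_t done = 0, i;

        if (!use_status) {
            /* two smaller completed() calls covering one submitted burst */
            done += rte_dma_completed(dev_id, vchan, n / 2,
                    NULL, &has_error);
            done += rte_dma_completed(dev_id, vchan, n - n / 2,
                    NULL, &has_error);
            return (done == n && !has_error) ? 0 : -1;
        }

        /* completed_status() gives a per-job code instead of a single flag */
        done = rte_dma_completed_status(dev_id, vchan, n, NULL, status);
        for (i = 0; i < done; i++)
            if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
                return -1;
        return (done == n) ? 0 : -1;
    }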

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 101 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 100 insertions(+), 1 deletion(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index b0fba1d84e..98b20f75ef 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -82,6 +82,98 @@ await_hw(int16_t dev_id, uint16_t vchan)
 	}
 }
 
+/* run a series of copy tests just using some different options for enqueues and completions */
+static int
+do_multi_copies(int16_t dev_id, uint16_t vchan,
+		int split_batches,     /* submit 2 x 16 or 1 x 32 burst */
+		int split_completions, /* gather 2 x 16 or 1 x 32 completions */
+		int use_completed_status) /* use completed or completed_status function */
+{
+	struct rte_mbuf *srcs[32], *dsts[32];
+	enum rte_dma_status_code sc[32];
+	unsigned int i, j;
+	bool dma_err = false;
+
+	/* Enqueue burst of copies and hit doorbell */
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		uint64_t *src_data;
+
+		if (split_batches && i == RTE_DIM(srcs) / 2)
+			rte_dma_submit(dev_id, vchan);
+
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+		if (srcs[i] == NULL || dsts[i] == NULL)
+			ERR_RETURN("Error allocating buffers\n");
+
+		src_data = rte_pktmbuf_mtod(srcs[i], uint64_t *);
+		for (j = 0; j < COPY_LEN/sizeof(uint64_t); j++)
+			src_data[j] = rte_rand();
+
+		if (rte_dma_copy(dev_id, vchan, srcs[i]->buf_iova + srcs[i]->data_off,
+				dsts[i]->buf_iova + dsts[i]->data_off, COPY_LEN, 0) != id_count++)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", i);
+	}
+	rte_dma_submit(dev_id, vchan);
+
+	await_hw(dev_id, vchan);
+
+	if (split_completions) {
+		/* gather completions in two halves */
+		uint16_t half_len = RTE_DIM(srcs) / 2;
+		int ret = rte_dma_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err)
+			ERR_RETURN("Error with rte_dma_completed - first half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+
+		ret = rte_dma_completed(dev_id, vchan, half_len, NULL, &dma_err);
+		if (ret != half_len || dma_err)
+			ERR_RETURN("Error with rte_dma_completed - second half. ret = %d, expected ret = %u, dma_err = %d\n",
+					ret, half_len, dma_err);
+	} else {
+		/* gather all completions in one go, using either
+		 * completed or completed_status fns
+		 */
+		if (!use_completed_status) {
+			int n = rte_dma_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+			if (n != RTE_DIM(srcs) || dma_err)
+				ERR_RETURN("Error with rte_dma_completed, %u [expected: %zu], dma_err = %d\n",
+						n, RTE_DIM(srcs), dma_err);
+		} else {
+			int n = rte_dma_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc);
+			if (n != RTE_DIM(srcs))
+				ERR_RETURN("Error with rte_dma_completed_status, %u [expected: %zu]\n",
+						n, RTE_DIM(srcs));
+
+			for (j = 0; j < (uint16_t)n; j++)
+				if (sc[j] != RTE_DMA_STATUS_SUCCESSFUL)
+					ERR_RETURN("Error with rte_dma_completed_status, job %u reports failure [code %u]\n",
+							j, sc[j]);
+		}
+	}
+
+	/* check for empty */
+	int ret = use_completed_status ?
+			rte_dma_completed_status(dev_id, vchan, RTE_DIM(srcs), NULL, sc) :
+			rte_dma_completed(dev_id, vchan, RTE_DIM(srcs), NULL, &dma_err);
+	if (ret != 0)
+		ERR_RETURN("Error with completion check - ops unexpectedly returned\n");
+
+	for (i = 0; i < RTE_DIM(srcs); i++) {
+		char *src_data, *dst_data;
+
+		src_data = rte_pktmbuf_mtod(srcs[i], char *);
+		dst_data = rte_pktmbuf_mtod(dsts[i], char *);
+		for (j = 0; j < COPY_LEN; j++)
+			if (src_data[j] != dst_data[j])
+				ERR_RETURN("Error with copy of packet %u, byte %u\n", i, j);
+
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
 static int
 test_enqueue_copies(int16_t dev_id, uint16_t vchan)
 {
@@ -176,7 +268,14 @@ test_enqueue_copies(int16_t dev_id, uint16_t vchan)
 		rte_pktmbuf_free(dst);
 	} while (0);
 
-	return 0;
+	/* test doing multiple copies */
+	return do_multi_copies(dev_id, vchan, 0, 0, 0) /* enqueue and complete 1 batch at a time */
+			/* enqueue 2 batches and then complete both */
+			|| do_multi_copies(dev_id, vchan, 1, 0, 0)
+			/* enqueue 1 batch, then complete in two halves */
+			|| do_multi_copies(dev_id, vchan, 0, 1, 0)
+			/* test using completed_status in place of regular completed API */
+			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
 static int
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 10/13] dmadev: add flag for error handling support
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
                       ` (8 preceding siblings ...)
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 09/13] app/test: add more comprehensive dmadev copy tests Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 11/13] app/test: test dmadev instance failure handling Bruce Richardson
                       ` (3 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Due to HW or driver limitations, not all dmadevs may support full error
handling, e.g. safely managing and reporting an invalid address to a copy
operation. The skeleton dmadev, for example, being pure software will
always seg-fault if passed an invalid address. To indicate the
availability of safe error handling by a device, we add a capability
flag for it.
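
As a short sketch of how the flag is expected to be consumed, for example
by the error-handling tests later in this series, a caller can gate its
error-injection paths on the new bit (the helper name is illustrative;
assumes rte_dmadev.h and stdbool.h):

    /* sketch: can this device be fed deliberately bad addresses? */
    static bool
    sketch_dev_handles_errors(int16_t dev_id)
    {
        struct rte_dma_info info;

        if (rte_dma_info_get(dev_id, &info) < 0)
            return false;

        return (info.dev_capa & RTE_DMA_CAPA_HANDLES_ERRORS) != 0;
    }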

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 lib/dmadev/rte_dmadev.c | 1 +
 lib/dmadev/rte_dmadev.h | 8 ++++++++
 2 files changed, 9 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index b6647e6ff8..182d32aedb 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -722,6 +722,7 @@ dma_capability_name(uint64_t capability)
 		{ RTE_DMA_CAPA_DEV_TO_DEV,  "dev2dev" },
 		{ RTE_DMA_CAPA_SVA,         "sva"     },
 		{ RTE_DMA_CAPA_SILENT,      "silent"  },
+		{ RTE_DMA_CAPA_HANDLES_ERRORS, "handles_errors" },
 		{ RTE_DMA_CAPA_OPS_COPY,    "copy"    },
 		{ RTE_DMA_CAPA_OPS_COPY_SG, "copy_sg" },
 		{ RTE_DMA_CAPA_OPS_FILL,    "fill"    },
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index bf78748b0c..f5d23017b1 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -262,6 +262,14 @@ int16_t rte_dma_next_dev(int16_t start_dev_id);
  * @see struct rte_dma_conf::silent_mode
  */
 #define RTE_DMA_CAPA_SILENT             RTE_BIT64(5)
+/** Supports error handling
+ *
+ * With this bit set, invalid input addresses will be reported as operation failures
+ * to the user but other operations can continue.
+ * Without this bit set, invalid data is not handled by either HW or driver, so user
+ * must ensure that all memory addresses are valid and accessible by HW.
+ */
+#define RTE_DMA_CAPA_HANDLES_ERRORS	RTE_BIT64(6)
 /** Support copy operation.
  * This capability start with index of 32, so that it could leave gap between
  * normal capability and ops capability.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 11/13] app/test: test dmadev instance failure handling
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
                       ` (9 preceding siblings ...)
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 10/13] dmadev: add flag for error handling support Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 12/13] app/test: add dmadev fill tests Bruce Richardson
                       ` (2 subsequent siblings)
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

Add a series of tests to inject bad copy operations into a dmadev to
test the error handling and reporting capabilities. Errors at various
positions in a burst are tested, as are errors in bursts with the fence
flag set, and multiple errors in a single burst.
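
The recovery sequence these cases all exercise is broadly: completed()
returns the count of jobs before the first failure and raises has_error,
after which completed_status() supplies per-job codes for the failed job
and anything behind it. A rough sketch of that sequence (the helper name
and the 64-job cap are assumptions, not taken from the patch):

    /* sketch: drain a submitted burst of 'n' jobs (n <= 64), counting failures */
    static int
    sketch_drain_with_errors(int16_t dev_id, uint16_t vchan, uint16_t n)
    {
        enum rte_dma_status_code status[64];
        uint16_t ok, rest, i, errors = 0;
        bool has_error = false;

        /* number of successful jobs before the first failure */
        ok = rte_dma_completed(dev_id, vchan, n, NULL, &has_error);
        if (!has_error)
            return 0;

        /* per-job status for the failed job and everything after it */
        rest = rte_dma_completed_status(dev_id, vchan, n - ok, NULL, status);
        for (i = 0; i < rest; i++)
            if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
                errors++;

        return errors;
    }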

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 361 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 361 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 98b20f75ef..8e61216f04 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -278,6 +278,354 @@ test_enqueue_copies(int16_t dev_id, uint16_t vchan)
 			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
+/* Failure handling test cases - global macros and variables for those tests */
+#define COMP_BURST_SZ	16
+#define OPT_FENCE(idx) ((fence && idx == 8) ? RTE_DMA_OP_FLAG_FENCE : 0)
+
+static int
+test_failure_in_full_burst(int16_t dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test single full batch statuses with failures */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	struct rte_dma_stats baseline, stats;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count;
+	unsigned int i;
+	bool error = false;
+	int err_count = 0;
+
+	rte_dma_stats_get(dev_id, vchan, &baseline); /* get a baseline set of stats */
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(i == fail_idx ? 0 : (srcs[i]->buf_iova + srcs[i]->data_off)),
+				dsts[i]->buf_iova + dsts[i]->data_off,
+				COPY_LEN, OPT_FENCE(i));
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", i);
+		if (i == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	rte_dma_stats_get(dev_id, vchan, &stats);
+	if (stats.submitted != baseline.submitted + COMP_BURST_SZ)
+		ERR_RETURN("Submitted stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.submitted, baseline.submitted + COMP_BURST_SZ);
+
+	await_hw(dev_id, vchan);
+
+	count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx)
+		ERR_RETURN("Error with rte_dma_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+	if (!error)
+		ERR_RETURN("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+
+	/* all checks ok, now verify calling completed() again always returns 0 */
+	for (i = 0; i < 10; i++)
+		if (rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error) != 0
+				|| error == false || idx != (invalid_addr_id - 1))
+			ERR_RETURN("Error with follow-up completed calls for fail idx %u\n",
+					fail_idx);
+
+	status_count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ,
+			&idx, status);
+	/* some HW may stop on error and be restarted after getting error status for a single value.
+	 * To handle this case, if we get just one error back, wait for more completions and get
+	 * status for the rest of the burst
+	 */
+	if (status_count == 1) {
+		await_hw(dev_id, vchan);
+		status_count += rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ - 1,
+					&idx, &status[1]);
+	}
+	/* check that at this point we have all status values */
+	if (status_count != COMP_BURST_SZ - count)
+		ERR_RETURN("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+	/* now verify just one failure followed by multiple successful or skipped entries */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+	for (i = 1; i < status_count; i++)
+		/* after a failure in a burst, depending on ordering/fencing,
+		 * operations may be successful or skipped because of previous error.
+		 */
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[i] != RTE_DMA_STATUS_NOT_ATTEMPTED)
+			ERR_RETURN("Error with status calls for fail idx %u. Status for job %u (of %u) is not successful\n",
+					fail_idx, count + i, COMP_BURST_SZ);
+
+	/* check the completed + errors stats are as expected */
+	rte_dma_stats_get(dev_id, vchan, &stats);
+	if (stats.completed != baseline.completed + COMP_BURST_SZ)
+		ERR_RETURN("Completed stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.completed, baseline.completed + COMP_BURST_SZ);
+	for (i = 0; i < status_count; i++)
+		err_count += (status[i] != RTE_DMA_STATUS_SUCCESSFUL);
+	if (stats.errors != baseline.errors + err_count)
+		ERR_RETURN("'Errors' stats value not as expected, %"PRIu64" not %"PRIu64"\n",
+				stats.errors, baseline.errors + err_count);
+
+	return 0;
+}
+
+static int
+test_individual_status_query_with_failure(int16_t dev_id, uint16_t vchan, bool fence,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* Test gathering batch statuses one at a time */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count = 0, status_count = 0;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, OPT_FENCE(j));
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* use regular "completed" until we hit error */
+	while (!error) {
+		uint16_t n = rte_dma_completed(dev_id, vchan, 1, &idx, &error);
+		count += n;
+		if (n > 1 || count >= COMP_BURST_SZ)
+			ERR_RETURN("Error - too many completions got\n");
+		if (n == 0 && !error)
+			ERR_RETURN("Error, unexpectedly got zero completions after %u completed\n",
+					count);
+	}
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, last successful index not as expected, got %u, expected %u\n",
+				idx, invalid_addr_id - 1);
+
+	/* use completed_status until we hit end of burst */
+	while (count + status_count < COMP_BURST_SZ) {
+		uint16_t n = rte_dma_completed_status(dev_id, vchan, 1, &idx,
+				&status[status_count]);
+		await_hw(dev_id, vchan); /* allow delay to ensure jobs are completed */
+		status_count += n;
+		if (n != 1)
+			ERR_RETURN("Error: unexpected number of completions received, %u, not 1\n",
+					n);
+	}
+
+	/* check for single failure */
+	if (status[0] == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error, unexpected successful DMA transaction\n");
+	for (j = 1; j < status_count; j++)
+		if (status[j] != RTE_DMA_STATUS_SUCCESSFUL
+				&& status[j] != RTE_DMA_STATUS_NOT_ATTEMPTED)
+			ERR_RETURN("Error, unexpected DMA error reported\n");
+
+	return 0;
+}
+
+static int
+test_single_item_status_query_with_failure(int16_t dev_id, uint16_t vchan,
+		struct rte_mbuf **srcs, struct rte_mbuf **dsts, unsigned int fail_idx)
+{
+	/* When an error occurs, just collect a single error using "completed_status()"
+	 * before going back to completed() calls
+	 */
+	enum rte_dma_status_code status;
+	uint16_t invalid_addr_id = 0;
+	uint16_t idx;
+	uint16_t count, status_count, count2;
+	unsigned int j;
+	bool error = false;
+
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		int id = rte_dma_copy(dev_id, vchan,
+				(j == fail_idx ? 0 : (srcs[j]->buf_iova + srcs[j]->data_off)),
+				dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+		if (j == fail_idx)
+			invalid_addr_id = id;
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	/* get up to the error point */
+	count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (count != fail_idx)
+		ERR_RETURN("Error with rte_dma_completed for failure test. Got returned %u not %u.\n",
+				count, fail_idx);
+	if (!error)
+		ERR_RETURN("Error, missing expected failed copy, %u. has_error is not set\n",
+				fail_idx);
+	if (idx != invalid_addr_id - 1)
+		ERR_RETURN("Error, missing expected failed copy, %u. Got last idx %u, not %u\n",
+				fail_idx, idx, invalid_addr_id - 1);
+
+	/* get the error code */
+	status_count = rte_dma_completed_status(dev_id, vchan, 1, &idx, &status);
+	if (status_count != 1)
+		ERR_RETURN("Error with completed_status calls for fail idx %u. Got %u not %u\n",
+				fail_idx, status_count, COMP_BURST_SZ - count);
+	if (status == RTE_DMA_STATUS_SUCCESSFUL)
+		ERR_RETURN("Error with status returned for fail idx %u. First status was not failure\n",
+				fail_idx);
+
+	/* delay in case time needed after err handled to complete other jobs */
+	await_hw(dev_id, vchan);
+
+	/* get the rest of the completions without status */
+	count2 = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+	if (error == true)
+		ERR_RETURN("Error, got further errors post completed_status() call, for failure case %u.\n",
+				fail_idx);
+	if (count + status_count + count2 != COMP_BURST_SZ)
+		ERR_RETURN("Error, incorrect number of completions received, got %u not %u\n",
+				count + status_count + count2, COMP_BURST_SZ);
+
+	return 0;
+}
+
+static int
+test_multi_failure(int16_t dev_id, uint16_t vchan, struct rte_mbuf **srcs, struct rte_mbuf **dsts,
+		const unsigned int *fail, size_t num_fail)
+{
+	/* test having multiple errors in one go */
+	enum rte_dma_status_code status[COMP_BURST_SZ];
+	unsigned int i, j;
+	uint16_t count, err_count = 0;
+	bool error = false;
+
+	/* enqueue and gather completions in one go */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere in the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dma_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ, NULL, status);
+	while (count < COMP_BURST_SZ) {
+		await_hw(dev_id, vchan);
+
+		uint16_t ret = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ - count,
+				NULL, &status[count]);
+		if (ret == 0)
+			ERR_RETURN("Error getting all completions for jobs. Got %u of %u\n",
+					count, COMP_BURST_SZ);
+		count += ret;
+	}
+	for (i = 0; i < count; i++)
+		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
+			err_count++;
+
+	if (err_count != num_fail)
+		ERR_RETURN("Error: Invalid number of failed completions returned, %u; expected %zu\n",
+			err_count, num_fail);
+
+	/* enqueue and gather completions in bursts, but getting errors one at a time */
+	for (j = 0; j < COMP_BURST_SZ; j++) {
+		uintptr_t src = srcs[j]->buf_iova + srcs[j]->data_off;
+		/* set up for failure if the current index is anywhere in the fails array */
+		for (i = 0; i < num_fail; i++)
+			if (j == fail[i])
+				src = 0;
+
+		int id = rte_dma_copy(dev_id, vchan,
+				src, dsts[j]->buf_iova + dsts[j]->data_off,
+				COPY_LEN, 0);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_copy for buffer %u\n", j);
+	}
+	rte_dma_submit(dev_id, vchan);
+	await_hw(dev_id, vchan);
+
+	count = 0;
+	err_count = 0;
+	while (count + err_count < COMP_BURST_SZ) {
+		count += rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, NULL, &error);
+		if (error) {
+			uint16_t ret = rte_dma_completed_status(dev_id, vchan, 1,
+					NULL, status);
+			if (ret != 1)
+				ERR_RETURN("Error getting error-status for completions\n");
+			err_count += ret;
+			await_hw(dev_id, vchan);
+		}
+	}
+	if (err_count != num_fail)
+		ERR_RETURN("Error: Incorrect number of failed completions received, got %u not %zu\n",
+				err_count, num_fail);
+
+	return 0;
+}
+
+static int
+test_completion_status(int16_t dev_id, uint16_t vchan, bool fence)
+{
+	const unsigned int fail[] = {0, 7, 14, 15};
+	struct rte_mbuf *srcs[COMP_BURST_SZ], *dsts[COMP_BURST_SZ];
+	unsigned int i;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		srcs[i] = rte_pktmbuf_alloc(pool);
+		dsts[i] = rte_pktmbuf_alloc(pool);
+	}
+
+	for (i = 0; i < RTE_DIM(fail); i++) {
+		if (test_failure_in_full_burst(dev_id, vchan, fence, srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		if (test_individual_status_query_with_failure(dev_id, vchan, fence,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+
+		/* the test runs the same whether fenced or unfenced, but no harm in running it twice */
+		if (test_single_item_status_query_with_failure(dev_id, vchan,
+				srcs, dsts, fail[i]) < 0)
+			return -1;
+	}
+
+	if (test_multi_failure(dev_id, vchan, srcs, dsts, fail, RTE_DIM(fail)) < 0)
+		return -1;
+
+	for (i = 0; i < COMP_BURST_SZ; i++) {
+		rte_pktmbuf_free(srcs[i]);
+		rte_pktmbuf_free(dsts[i]);
+	}
+	return 0;
+}
+
+static int
+test_completion_handling(int16_t dev_id, uint16_t vchan)
+{
+	return test_completion_status(dev_id, vchan, false)              /* without fences */
+			|| test_completion_status(dev_id, vchan, true);  /* with fences */
+
+}
+
 static int
 test_dmadev_instance(int16_t dev_id)
 {
@@ -335,6 +683,19 @@ test_dmadev_instance(int16_t dev_id)
 	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
 		goto err;
 
+	/* to test error handling we can provide null pointers for source or dest in copies. This
+	 * requires VA mode in DPDK, since NULL(0) is a valid physical address.
+	 * We also need hardware that can report errors back.
+	 */
+	if (rte_eal_iova_mode() != RTE_IOVA_VA)
+		printf("DMA Dev %u: DPDK not in VA mode, skipping error handling tests\n", dev_id);
+	else if ((info.dev_capa & RTE_DMA_CAPA_HANDLES_ERRORS) == 0)
+		printf("DMA Dev %u: device does not report errors, skipping error handling tests\n",
+				dev_id);
+	else if (runtest("error handling", test_completion_handling, 1,
+			dev_id, vchan, !CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 12/13] app/test: add dmadev fill tests
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
                       ` (10 preceding siblings ...)
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 11/13] app/test: test dmadev instance failure handling Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 13/13] app/test: add dmadev burst capacity API test Bruce Richardson
  2021-10-18  9:20     ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Thomas Monjalon
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

For DMA devices which support the fill operation, run unit tests to
verify that fill behaviour is correct.
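
Each case boils down to a single rte_dma_fill() call followed by a
byte-wise check of the destination against the repeating 8-byte pattern.
A minimal sketch, with illustrative names, assuming the destination's
IOVA and virtual address are both known and that a real caller bounds the
poll loop:

    /* sketch: fill 'len' bytes at the destination with a pattern and verify */
    static int
    sketch_fill_and_check(int16_t dev_id, uint16_t vchan,
            rte_iova_t dst_iova, const char *dst_data, uint32_t len)
    {
        const uint64_t pattern = 0xfedcba9876543210;
        uint32_t i;

        if (rte_dma_fill(dev_id, vchan, pattern, dst_iova, len,
                RTE_DMA_OP_FLAG_SUBMIT) < 0)
            return -1;

        /* wait for the single job to be reported back */
        while (rte_dma_completed(dev_id, vchan, 1, NULL, NULL) == 0)
            rte_pause();

        /* the pattern repeats every 8 bytes within the filled region */
        for (i = 0; i < len; i++)
            if (dst_data[i] != ((const char *)&pattern)[i % 8])
                return -1;
        return 0;
    }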

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 49 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 8e61216f04..27d2e7a5c4 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -623,7 +623,51 @@ test_completion_handling(int16_t dev_id, uint16_t vchan)
 {
 	return test_completion_status(dev_id, vchan, false)              /* without fences */
 			|| test_completion_status(dev_id, vchan, true);  /* with fences */
+}
+
+static int
+test_enqueue_fill(int16_t dev_id, uint16_t vchan)
+{
+	const unsigned int lengths[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst;
+	char *dst_data;
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	dst = rte_pktmbuf_alloc(pool);
+	if (dst == NULL)
+		ERR_RETURN("Failed to allocate mbuf\n");
+	dst_data = rte_pktmbuf_mtod(dst, char *);
+
+	for (i = 0; i < RTE_DIM(lengths); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, rte_pktmbuf_data_len(dst));
+
+		/* perform the fill operation */
+		int id = rte_dma_fill(dev_id, vchan, pattern,
+				rte_pktmbuf_iova(dst), lengths[i], RTE_DMA_OP_FLAG_SUBMIT);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_fill\n");
+		await_hw(dev_id, vchan);
+
+		if (rte_dma_completed(dev_id, vchan, 1, NULL, NULL) != 1)
+			ERR_RETURN("Error: fill operation failed (length: %u)\n", lengths[i]);
+		/* check the data from the fill operation is correct */
+		for (j = 0; j < lengths[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte)
+				ERR_RETURN("Error with fill operation (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], pat_byte);
+		}
+		/* check that the data after the fill operation was not written to */
+		for (; j < rte_pktmbuf_data_len(dst); j++)
+			if (dst_data[j] != 0)
+				ERR_RETURN("Error, fill operation wrote too far (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], 0);
+	}
 
+	rte_pktmbuf_free(dst);
+	return 0;
 }
 
 static int
@@ -696,6 +740,11 @@ test_dmadev_instance(int16_t dev_id)
 			dev_id, vchan, !CHECK_ERRS) < 0)
 		goto err;
 
+	if ((info.dev_capa & RTE_DMA_CAPA_OPS_FILL) == 0)
+		printf("DMA Dev %u: No device fill support, skipping fill tests\n", dev_id);
+	else if (runtest("fill", test_enqueue_fill, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* [dpdk-dev] [PATCH v7 13/13] app/test: add dmadev burst capacity API test
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
                       ` (11 preceding siblings ...)
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 12/13] app/test: add dmadev fill tests Bruce Richardson
@ 2021-10-13 15:17     ` Bruce Richardson
  2021-10-18  9:20     ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Thomas Monjalon
  13 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-13 15:17 UTC (permalink / raw)
  To: dev; +Cc: conor.walsh, kevin.laatz, fengchengwen, jerinj, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Add a test case to validate the functionality of drivers' burst capacity
API implementations.
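
In normal use the burst capacity API acts as a back-pressure check before
enqueueing, which is what the test validates by watching the value drop
with each enqueue and recover after completions are gathered. A minimal
usage sketch (the helper name is an assumption, not from the patch):

    /* sketch: enqueue a burst only if the vchan has room for all of it */
    static int
    sketch_try_enqueue_burst(int16_t dev_id, uint16_t vchan,
            const rte_iova_t *src, const rte_iova_t *dst,
            uint16_t n, uint32_t len)
    {
        uint16_t i;

        if (rte_dma_burst_capacity(dev_id, vchan) < n)
            return -1; /* ring too full; retry after gathering completions */

        for (i = 0; i < n; i++)
            if (rte_dma_copy(dev_id, vchan, src[i], dst[i], len, 0) < 0)
                return -1;

        return rte_dma_submit(dev_id, vchan);
    }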

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 67 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 27d2e7a5c4..1e327bd45f 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -670,6 +670,69 @@ test_enqueue_fill(int16_t dev_id, uint16_t vchan)
 	return 0;
 }
 
+static int
+test_burst_capacity(int16_t dev_id, uint16_t vchan)
+{
+#define CAP_TEST_BURST_SIZE	64
+	const int ring_space = rte_dma_burst_capacity(dev_id, vchan);
+	struct rte_mbuf *src, *dst;
+	int i, j, iter;
+	int cap, ret;
+	bool dma_err;
+
+	src = rte_pktmbuf_alloc(pool);
+	dst = rte_pktmbuf_alloc(pool);
+
+	/* to test capacity, we enqueue elements and check capacity is reduced
+	 * by one each time - rebaselining the expected value after each burst
+	 * as the capacity is only for a burst. We enqueue multiple bursts to
+	 * fill up half the ring, before emptying it again. We do this twice to
+	 * ensure that we get to test scenarios where we get ring wrap-around
+	 */
+	for (iter = 0; iter < 2; iter++) {
+		for (i = 0; i < (ring_space / (2 * CAP_TEST_BURST_SIZE)) + 1; i++) {
+			cap = rte_dma_burst_capacity(dev_id, vchan);
+
+			for (j = 0; j < CAP_TEST_BURST_SIZE; j++) {
+				ret = rte_dma_copy(dev_id, vchan, rte_pktmbuf_iova(src),
+						rte_pktmbuf_iova(dst), COPY_LEN, 0);
+				if (ret < 0)
+					ERR_RETURN("Error with rte_dmadev_copy\n");
+
+				if (rte_dma_burst_capacity(dev_id, vchan) != cap - (j + 1))
+					ERR_RETURN("Error, ring capacity did not change as expected\n");
+			}
+			if (rte_dma_submit(dev_id, vchan) < 0)
+				ERR_RETURN("Error, failed to submit burst\n");
+
+			if (cap < rte_dma_burst_capacity(dev_id, vchan))
+				ERR_RETURN("Error, avail ring capacity has gone up, not down\n");
+		}
+		await_hw(dev_id, vchan);
+
+		for (i = 0; i < (ring_space / (2 * CAP_TEST_BURST_SIZE)) + 1; i++) {
+			ret = rte_dma_completed(dev_id, vchan,
+					CAP_TEST_BURST_SIZE, NULL, &dma_err);
+			if (ret != CAP_TEST_BURST_SIZE || dma_err) {
+				enum rte_dma_status_code status;
+
+				rte_dma_completed_status(dev_id, vchan, 1, NULL, &status);
+				ERR_RETURN("Error with rte_dmadev_completed, %u [expected: %u], dma_err = %d, i = %u, iter = %u, status = %u\n",
+						ret, CAP_TEST_BURST_SIZE, dma_err, i, iter, status);
+			}
+		}
+		cap = rte_dma_burst_capacity(dev_id, vchan);
+		if (cap != ring_space)
+			ERR_RETURN("Error, ring capacity has not reset to original value, got %u, expected %u\n",
+					cap, ring_space);
+	}
+
+	rte_pktmbuf_free(src);
+	rte_pktmbuf_free(dst);
+
+	return 0;
+}
+
 static int
 test_dmadev_instance(int16_t dev_id)
 {
@@ -727,6 +790,10 @@ test_dmadev_instance(int16_t dev_id)
 	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
 		goto err;
 
+	/* run some burst capacity tests */
+	if (runtest("burst capacity", test_burst_capacity, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	/* to test error handling we can provide null pointers for source or dest in copies. This
 	 * requires VA mode in DPDK, since NULL(0) is a valid physical address.
 	 * We also need hardware that can report errors back.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers
  2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
                       ` (12 preceding siblings ...)
  2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 13/13] app/test: add dmadev burst capacity API test Bruce Richardson
@ 2021-10-18  9:20     ` Thomas Monjalon
  2021-10-21 12:06       ` fengchengwen
  13 siblings, 1 reply; 130+ messages in thread
From: Thomas Monjalon @ 2021-10-18  9:20 UTC (permalink / raw)
  To: conor.walsh, kevin.laatz, Bruce Richardson; +Cc: dev, fengchengwen, jerinj

13/10/2021 17:17, Bruce Richardson:
> Bruce Richardson (10):
>   dmadev: add channel status check for testing use
>   dma/skeleton: add channel status function
>   dma/skeleton: add burst capacity function
>   dmadev: add device iterator
>   app/test: add basic dmadev instance tests
>   app/test: add basic dmadev copy tests
>   app/test: run test suite on skeleton driver
>   app/test: add more comprehensive dmadev copy tests
>   dmadev: add flag for error handling support
>   app/test: test dmadev instance failure handling
> 
> Kevin Laatz (3):
>   dmadev: add burst capacity API
>   app/test: add dmadev fill tests
>   app/test: add dmadev burst capacity API test

Applied, thanks.




^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers
  2021-10-18  9:20     ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Thomas Monjalon
@ 2021-10-21 12:06       ` fengchengwen
  2021-10-21 14:55         ` Bruce Richardson
  0 siblings, 1 reply; 130+ messages in thread
From: fengchengwen @ 2021-10-21 12:06 UTC (permalink / raw)
  To: Thomas Monjalon, conor.walsh, kevin.laatz, Bruce Richardson; +Cc: dev, jerinj

Hi Bruce,

I observed a large number of checkpatch errors [1] when synchronizing to our inner CI;
almost all of them are lines over 80 characters, and many of them are not long log messages.

The DPDK coding style recommends lines be no more than 80 characters except in the rarest
situations (which I think long log messages fall under).

I don't know which to follow: should these just be ignored, or should they be fixed?


[1]:
lib/dmadev/rte_dmadev.c:56: WARNING:LONG_LINE: line length of 95 exceeds 80 columns
lib/dmadev/rte_dmadev.c:696: WARNING:LONG_LINE: line length of 87 exceeds 80 columns
lib/dmadev/rte_dmadev.c:704: WARNING:LONG_LINE: line length of 85 exceeds 80 columns
lib/dmadev/rte_dmadev.h:267: WARNING:LONG_LINE_COMMENT: line length of 84 exceeds 80 columns
lib/dmadev/rte_dmadev.h:269: WARNING:LONG_LINE_COMMENT: line length of 84 exceeds 80 columns
lib/dmadev/rte_dmadev.h:677: WARNING:LONG_LINE_COMMENT: line length of 91 exceeds 80 columns
lib/dmadev/rte_dmadev.h:683: WARNING:LONG_LINE_COMMENT: line length of 95 exceeds 80 columns
lib/dmadev/rte_dmadev.h:691: WARNING:LONG_LINE_COMMENT: line length of 90 exceeds 80 columns
lib/dmadev/rte_dmadev.h:692: WARNING:LONG_LINE_COMMENT: line length of 82 exceeds 80 columns
lib/dmadev/rte_dmadev.h:706: WARNING:LONG_LINE: line length of 88 exceeds 80 columns
lib/dmadev/rte_dmadev_core.h:51: WARNING:LONG_LINE: line length of 86 exceeds 80 columns
lib/dmadev/rte_dmadev_pmd.h:58: WARNING:LONG_LINE: line length of 84 exceeds 80 columns
total: 1 errors, 12 warnings, 235 lines checked

app/test/test_dmadev.c:19: WARNING:LONG_LINE: line length of 95 exceeds 80 columns
app/test/test_dmadev.c:19: WARNING:MACRO_WITH_FLOW_CONTROL: Macros with flow control statements should be avoided
app/test/test_dmadev.c:39: WARNING:LONG_LINE: line length of 94 exceeds 80 columns
app/test/test_dmadev.c:72: WARNING:LONG_LINE_COMMENT: line length of 90 exceeds 80 columns
app/test/test_dmadev.c:77: WARNING:LONG_LINE_COMMENT: line length of 96 exceeds 80 columns
app/test/test_dmadev.c:79: WARNING:LONG_LINE: line length of 83 exceeds 80 columns
app/test/test_dmadev.c:85: WARNING:LONG_LINE_COMMENT: line length of 95 exceeds 80 columns
app/test/test_dmadev.c:90: WARNING:LONG_LINE_COMMENT: line length of 90 exceeds 80 columns
app/test/test_dmadev.c:113: WARNING:LONG_LINE: line length of 86 exceeds 80 columns
app/test/test_dmadev.c:114: WARNING:LONG_LINE: line length of 98 exceeds 80 columns
app/test/test_dmadev.c:115: WARNING:LONG_LINE: line length of 81 exceeds 80 columns
app/test/test_dmadev.c:124: WARNING:LONG_LINE: line length of 85 exceeds 80 columns
app/test/test_dmadev.c:129: WARNING:LONG_LINE: line length of 81 exceeds 80 columns
app/test/test_dmadev.c:138: WARNING:LONG_LINE: line length of 96 exceeds 80 columns
app/test/test_dmadev.c:143: WARNING:LONG_LINE: line length of 97 exceeds 80 columns
app/test/test_dmadev.c:157: WARNING:LONG_LINE: line length of 90 exceeds 80 columns
app/test/test_dmadev.c:158: WARNING:LONG_LINE: line length of 88 exceeds 80 columns
app/test/test_dmadev.c:169: WARNING:LONG_LINE: line length of 92 exceeds 80 columns
app/test/test_dmadev.c:196: WARNING:LONG_LINE: line length of 94 exceeds 80 columns
app/test/test_dmadev.c:207: WARNING:LONG_LINE: line length of 95 exceeds 80 columns
app/test/test_dmadev.c:249: WARNING:LONG_LINE: line length of 88 exceeds 80 columns
app/test/test_dmadev.c:254: WARNING:LONG_LINE: line length of 81 exceeds 80 columns
app/test/test_dmadev.c:272: WARNING:LONG_LINE_COMMENT: line length of 99 exceeds 80 columns
app/test/test_dmadev.c:277: WARNING:LONG_LINE_COMMENT: line length of 91 exceeds 80 columns
app/test/test_dmadev.c:287: WARNING:LONG_LINE: line length of 86 exceeds 80 columns
app/test/test_dmadev.c:299: WARNING:LONG_LINE_COMMENT: line length of 86 exceeds 80 columns
app/test/test_dmadev.c:302: WARNING:LONG_LINE: line length of 94 exceeds 80 columns
app/test/test_dmadev.c:306: WARNING:LONG_LINE: line length of 81 exceeds 80 columns
app/test/test_dmadev.c:313: WARNING:LONG_LINE: line length of 94 exceeds 80 columns
app/test/test_dmadev.c:314: WARNING:LONG_LINE: line length of 85 exceeds 80 columns
app/test/test_dmadev.c:329: WARNING:LONG_LINE_COMMENT: line length of 82 exceeds 80 columns
app/test/test_dmadev.c:331: WARNING:LONG_LINE: line length of 86 exceeds 80 columns
app/test/test_dmadev.c:332: WARNING:LONG_LINE: line length of 82 exceeds 80 columns
app/test/test_dmadev.c:338: WARNING:LONG_LINE_COMMENT: line length of 97 exceeds 80 columns
app/test/test_dmadev.c:339: WARNING:LONG_LINE_COMMENT: line length of 96 exceeds 80 columns
app/test/test_dmadev.c:344: WARNING:LONG_LINE: line length of 90 exceeds 80 columns
app/test/test_dmadev.c:351: WARNING:LONG_LINE_COMMENT: line length of 92 exceeds 80 columns
app/test/test_dmadev.c:357: WARNING:LONG_LINE_COMMENT: line length of 85 exceeds 80 columns
app/test/test_dmadev.c:367: WARNING:LONG_LINE: line length of 94 exceeds 80 columns
app/test/test_dmadev.c:368: WARNING:LONG_LINE: line length of 85 exceeds 80 columns
app/test/test_dmadev.c:372: WARNING:LONG_LINE: line length of 93 exceeds 80 columns
app/test/test_dmadev.c:379: WARNING:LONG_LINE: line length of 85 exceeds 80 columns
app/test/test_dmadev.c:380: WARNING:LONG_LINE: line length of 86 exceeds 80 columns
app/test/test_dmadev.c:392: WARNING:LONG_LINE: line length of 94 exceeds 80 columns
app/test/test_dmadev.c:396: WARNING:LONG_LINE: line length of 81 exceeds 80 columns
app/test/test_dmadev.c:421: WARNING:LONG_LINE_COMMENT: line length of 87 exceeds 80 columns
app/test/test_dmadev.c:441: WARNING:LONG_LINE: line length of 86 exceeds 80 columns
app/test/test_dmadev.c:443: WARNING:LONG_LINE_COMMENT: line length of 83 exceeds 80 columns
app/test/test_dmadev.c:455: WARNING:LONG_LINE: line length of 94 exceeds 80 columns
app/test/test_dmadev.c:459: WARNING:LONG_LINE: line length of 81 exceeds 80 columns
app/test/test_dmadev.c:479: WARNING:LONG_LINE: line length of 81 exceeds 80 columns
app/test/test_dmadev.c:503: WARNING:LONG_LINE: line length of 98 exceeds 80 columns
app/test/test_dmadev.c:515: WARNING:LONG_LINE_COMMENT: line length of 92 exceeds 80 columns
app/test/test_dmadev.c:524: WARNING:LONG_LINE: line length of 81 exceeds 80 columns
app/test/test_dmadev.c:529: WARNING:LONG_LINE: line length of 85 exceeds 80 columns
app/test/test_dmadev.c:533: WARNING:LONG_LINE: line length of 93 exceeds 80 columns
app/test/test_dmadev.c:548: WARNING:LONG_LINE_COMMENT: line length of 88 exceeds 80 columns
app/test/test_dmadev.c:551: WARNING:LONG_LINE_COMMENT: line length of 92 exceeds 80 columns
app/test/test_dmadev.c:560: WARNING:LONG_LINE: line length of 81 exceeds 80 columns
app/test/test_dmadev.c:568: WARNING:LONG_LINE: line length of 87 exceeds 80 columns
app/test/test_dmadev.c:570: WARNING:LONG_LINE: line length of 81 exceeds 80 columns
app/test/test_dmadev.c:598: WARNING:LONG_LINE: line length of 94 exceeds 80 columns
app/test/test_dmadev.c:601: WARNING:LONG_LINE: line length of 83 exceeds 80 columns
app/test/test_dmadev.c:605: WARNING:LONG_LINE_COMMENT: line length of 95 exceeds 80 columns
app/test/test_dmadev.c:611: WARNING:LONG_LINE: line length of 83 exceeds 80 columns
app/test/test_dmadev.c:624: WARNING:LONG_LINE_COMMENT: line length of 93 exceeds 80 columns
app/test/test_dmadev.c:625: WARNING:LONG_LINE_COMMENT: line length of 90 exceeds 80 columns
app/test/test_dmadev.c:648: WARNING:LONG_LINE: line length of 91 exceeds 80 columns
app/test/test_dmadev.c:654: WARNING:LONG_LINE: line length of 94 exceeds 80 columns
app/test/test_dmadev.c:660: WARNING:LONG_LINE: line length of 83 exceeds 80 columns
app/test/test_dmadev.c:662: WARNING:LONG_LINE_COMMENT: line length of 85 exceeds 80 columns
app/test/test_dmadev.c:693: WARNING:LONG_LINE: line length of 84 exceeds 80 columns
app/test/test_dmadev.c:697: WARNING:LONG_LINE: line length of 88 exceeds 80 columns
app/test/test_dmadev.c:698: WARNING:LONG_LINE: line length of 84 exceeds 80 columns
app/test/test_dmadev.c:702: WARNING:LONG_LINE: line length of 91 exceeds 80 columns
app/test/test_dmadev.c:713: WARNING:LONG_LINE: line length of 84 exceeds 80 columns
app/test/test_dmadev.c:719: WARNING:LONG_LINE: line length of 90 exceeds 80 columns
app/test/test_dmadev.c:721: WARNING:LONG_LINE: line length of 100 exceeds 80 columns
app/test/test_dmadev.c:755: WARNING:LONG_LINE: line length of 85 exceeds 80 columns
app/test/test_dmadev.c:765: WARNING:LONG_LINE: line length of 93 exceeds 80 columns
app/test/test_dmadev.c:774: WARNING:LONG_LINE: line length of 89 exceeds 80 columns
app/test/test_dmadev.c:789: WARNING:LONG_LINE_COMMENT: line length of 88 exceeds 80 columns
app/test/test_dmadev.c:790: WARNING:LONG_LINE: line length of 85 exceeds 80 columns
app/test/test_dmadev.c:794: WARNING:LONG_LINE: line length of 93 exceeds 80 columns
app/test/test_dmadev.c:797: WARNING:LONG_LINE_COMMENT: line length of 97 exceeds 80 columns
app/test/test_dmadev.c:802: WARNING:LONG_LINE: line length of 99 exceeds 80 columns
app/test/test_dmadev.c:811: WARNING:LONG_LINE: line length of 92 exceeds 80 columns
app/test/test_dmadev.c:812: WARNING:LONG_LINE: line length of 86 exceeds 80 columns
app/test/test_dmadev.c:833: WARNING:LONG_LINE_COMMENT: line length of 98 exceeds 80 columns
total: 0 errors, 89 warnings, 861 lines checked


On 2021/10/18 17:20, Thomas Monjalon wrote:
> 13/10/2021 17:17, Bruce Richardson:
>> Bruce Richardson (10):
>>   dmadev: add channel status check for testing use
>>   dma/skeleton: add channel status function
>>   dma/skeleton: add burst capacity function
>>   dmadev: add device iterator
>>   app/test: add basic dmadev instance tests
>>   app/test: add basic dmadev copy tests
>>   app/test: run test suite on skeleton driver
>>   app/test: add more comprehensive dmadev copy tests
>>   dmadev: add flag for error handling support
>>   app/test: test dmadev instance failure handling
>>
>> Kevin Laatz (3):
>>   dmadev: add burst capacity API
>>   app/test: add dmadev fill tests
>>   app/test: add dmadev burst capacity API test
> 
> Applied, thanks.
> 
> 
> 
> 
> .
> 


^ permalink raw reply	[flat|nested] 130+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers
  2021-10-21 12:06       ` fengchengwen
@ 2021-10-21 14:55         ` Bruce Richardson
  0 siblings, 0 replies; 130+ messages in thread
From: Bruce Richardson @ 2021-10-21 14:55 UTC (permalink / raw)
  To: fengchengwen; +Cc: Thomas Monjalon, conor.walsh, kevin.laatz, dev, jerinj

On Thu, Oct 21, 2021 at 08:06:13PM +0800, fengchengwen wrote:
> Hi Bruce,
> 
> I observed a large number of checkpatch errors [1] when synchronizing to our inner CI;
> almost all of them are lines over 80 characters, and many of them are not long log messages.
> 
> The DPDK coding style recommends lines be no more than 80 characters except in the rarest
> situations (which I think long log messages fall under).
> 
> I don't know which to follow: should these just be ignored, or should they be fixed?

I'd rather not change these as the code is more readable unwrapped.
Although the docs do indeed recommend 80 characters, in practice we allow
up to 100 characters, and the CIs run using that limit for checkpatch. I've
actually submitted a patchset to try and update our doc and usertools to
match the in-practice policy.

/Bruce

[1] http://patches.dpdk.org/project/dpdk/patch/20211020142601.157649-1-bruce.richardson@intel.com/

^ permalink raw reply	[flat|nested] 130+ messages in thread

end of thread, other threads:[~2021-10-21 14:55 UTC | newest]

Thread overview: 130+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-26 18:32 [dpdk-dev] [RFC PATCH 0/7] add test suite for DMA drivers Bruce Richardson
2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 1/7] app/test: take API tests from skeleton dmadev Bruce Richardson
2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 2/7] dmadev: remove selftest support Bruce Richardson
2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 3/7] app/test: add basic dmadev instance tests Bruce Richardson
2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 4/7] app/test: add basic dmadev copy tests Bruce Richardson
2021-08-27  7:14   ` Jerin Jacob
2021-08-27 10:41     ` Bruce Richardson
2021-08-26 18:32 ` [dpdk-dev] [RFC PATCH 5/7] app/test: add more comprehensive " Bruce Richardson
2021-08-26 18:33 ` [dpdk-dev] [RFC PATCH 6/7] app/test: test dmadev instance failure handling Bruce Richardson
2021-08-26 18:33 ` [dpdk-dev] [RFC PATCH 7/7] app/test: add dmadev fill tests Bruce Richardson
2021-09-01 16:32 ` [dpdk-dev] [PATCH v2 0/6] add test suite for DMA drivers Bruce Richardson
2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 1/6] dmadev: add device idle check for testing use Bruce Richardson
2021-09-02 12:54     ` fengchengwen
2021-09-02 14:21       ` Bruce Richardson
2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 2/6] app/test: add basic dmadev instance tests Bruce Richardson
2021-09-01 19:24     ` Mattias Rönnblom
2021-09-02 10:30       ` Bruce Richardson
2021-09-03 16:07     ` Conor Walsh
2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 3/6] app/test: add basic dmadev copy tests Bruce Richardson
2021-09-02  7:44     ` Jerin Jacob
2021-09-02  8:06       ` Bruce Richardson
2021-09-02 10:54         ` Jerin Jacob
2021-09-02 11:43           ` Bruce Richardson
2021-09-02 13:05             ` Jerin Jacob
2021-09-02 14:21               ` Bruce Richardson
2021-09-03 16:05     ` Kevin Laatz
2021-09-03 16:07     ` Conor Walsh
2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 4/6] app/test: add more comprehensive " Bruce Richardson
2021-09-03 16:08     ` Conor Walsh
2021-09-03 16:11     ` Kevin Laatz
2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 5/6] app/test: test dmadev instance failure handling Bruce Richardson
2021-09-01 19:53     ` Mattias Rönnblom
2021-09-03 16:08     ` Conor Walsh
2021-09-03 16:21     ` Kevin Laatz
2021-09-01 16:32   ` [dpdk-dev] [PATCH v2 6/6] app/test: add dmadev fill tests Bruce Richardson
2021-09-03 16:09     ` Conor Walsh
2021-09-03 16:17       ` Conor Walsh
2021-09-03 16:33         ` Bruce Richardson
2021-09-07 16:49 ` [dpdk-dev] [PATCH v3 0/8] add test suite for DMA drivers Bruce Richardson
2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 1/8] dmadev: add channel status check for testing use Bruce Richardson
2021-09-08 10:50     ` Walsh, Conor
2021-09-08 13:20     ` Kevin Laatz
2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API Bruce Richardson
2021-09-08 10:53     ` Walsh, Conor
2021-09-08 18:17     ` Jerin Jacob
2021-09-09  8:16       ` Bruce Richardson
2021-09-17 13:54         ` Jerin Jacob
2021-09-17 14:37           ` Pai G, Sunil
2021-09-18 12:18             ` Jerin Jacob
2021-09-18  1:06           ` Hu, Jiayu
2021-09-18 12:12             ` Jerin Jacob
2021-09-21 13:57               ` Pai G, Sunil
2021-09-21 14:56                 ` Jerin Jacob
2021-09-21 15:34                   ` Pai G, Sunil
2021-09-21 16:58                     ` Jerin Jacob
2021-09-21 17:12                       ` Pai G, Sunil
2021-09-21 18:11                         ` Jerin Jacob
2021-09-22  1:51                           ` fengchengwen
2021-09-22  7:56                             ` Bruce Richardson
2021-09-22 16:35                               ` Bruce Richardson
2021-09-22 17:29                                 ` Jerin Jacob
2021-09-23 13:24                                   ` fengchengwen
2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 3/8] app/test: add basic dmadev instance tests Bruce Richardson
2021-09-08 13:21     ` Kevin Laatz
2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 4/8] app/test: add basic dmadev copy tests Bruce Richardson
2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 5/8] app/test: add more comprehensive " Bruce Richardson
2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 6/8] app/test: test dmadev instance failure handling Bruce Richardson
2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 7/8] app/test: add dmadev fill tests Bruce Richardson
2021-09-07 16:49   ` [dpdk-dev] [PATCH v3 8/8] app/test: add dmadev burst capacity API test Bruce Richardson
2021-09-08 11:03     ` Walsh, Conor
2021-09-17 13:30 ` [dpdk-dev] [PATCH v4 0/9] add test suite for DMA drivers Bruce Richardson
2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 1/9] dmadev: add channel status check for testing use Bruce Richardson
2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 2/9] dmadev: add burst capacity API Bruce Richardson
2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 3/9] dmadev: add device iterator Bruce Richardson
2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 4/9] app/test: add basic dmadev instance tests Bruce Richardson
2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 5/9] app/test: add basic dmadev copy tests Bruce Richardson
2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 6/9] app/test: add more comprehensive " Bruce Richardson
2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 7/9] app/test: test dmadev instance failure handling Bruce Richardson
2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 8/9] app/test: add dmadev fill tests Bruce Richardson
2021-09-17 13:30   ` [dpdk-dev] [PATCH v4 9/9] app/test: add dmadev burst capacity API test Bruce Richardson
2021-09-17 13:54 ` [dpdk-dev] [PATCH v5 0/9] add test suite for DMA drivers Bruce Richardson
2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 1/9] dmadev: add channel status check for testing use Bruce Richardson
2021-09-22  8:25     ` fengchengwen
2021-09-22  8:31       ` Bruce Richardson
2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 2/9] dmadev: add burst capacity API Bruce Richardson
2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 3/9] dmadev: add device iterator Bruce Richardson
2021-09-22  8:46     ` fengchengwen
2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 4/9] app/test: add basic dmadev instance tests Bruce Richardson
2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 5/9] app/test: add basic dmadev copy tests Bruce Richardson
2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 6/9] app/test: add more comprehensive " Bruce Richardson
2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 7/9] app/test: test dmadev instance failure handling Bruce Richardson
2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 8/9] app/test: add dmadev fill tests Bruce Richardson
2021-09-17 13:54   ` [dpdk-dev] [PATCH v5 9/9] app/test: add dmadev burst capacity API test Bruce Richardson
2021-09-24 10:29 ` [dpdk-dev] [PATCH v6 00/13] add test suite for DMA drivers Bruce Richardson
2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 01/13] dmadev: add channel status check for testing use Bruce Richardson
2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 02/13] dma/skeleton: add channel status function Bruce Richardson
2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 03/13] dmadev: add burst capacity API Bruce Richardson
2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 04/13] dma/skeleton: add burst capacity function Bruce Richardson
2021-09-24 14:51     ` Conor Walsh
2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 05/13] dmadev: add device iterator Bruce Richardson
2021-09-24 14:52     ` Conor Walsh
2021-09-24 15:58     ` Kevin Laatz
2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 06/13] app/test: add basic dmadev instance tests Bruce Richardson
2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 07/13] app/test: add basic dmadev copy tests Bruce Richardson
2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 08/13] app/test: run test suite on skeleton driver Bruce Richardson
2021-09-24 15:58     ` Kevin Laatz
2021-09-24 10:29   ` [dpdk-dev] [PATCH v6 09/13] app/test: add more comprehensive dmadev copy tests Bruce Richardson
2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 10/13] dmadev: add flag for error handling support Bruce Richardson
2021-09-24 14:52     ` Conor Walsh
2021-09-24 15:58     ` Kevin Laatz
2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 11/13] app/test: test dmadev instance failure handling Bruce Richardson
2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 12/13] app/test: add dmadev fill tests Bruce Richardson
2021-09-24 10:31   ` [dpdk-dev] [PATCH v6 13/13] app/test: add dmadev burst capacity API test Bruce Richardson
2021-10-13 15:17   ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 01/13] dmadev: add channel status check for testing use Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 02/13] dma/skeleton: add channel status function Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 03/13] dmadev: add burst capacity API Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 04/13] dma/skeleton: add burst capacity function Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 05/13] dmadev: add device iterator Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 06/13] app/test: add basic dmadev instance tests Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 07/13] app/test: add basic dmadev copy tests Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 08/13] app/test: run test suite on skeleton driver Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 09/13] app/test: add more comprehensive dmadev copy tests Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 10/13] dmadev: add flag for error handling support Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 11/13] app/test: test dmadev instance failure handling Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 12/13] app/test: add dmadev fill tests Bruce Richardson
2021-10-13 15:17     ` [dpdk-dev] [PATCH v7 13/13] app/test: add dmadev burst capacity API test Bruce Richardson
2021-10-18  9:20     ` [dpdk-dev] [PATCH v7 00/13] add test suite for DMA drivers Thomas Monjalon
2021-10-21 12:06       ` fengchengwen
2021-10-21 14:55         ` Bruce Richardson
