DPDK patches and discussions
* [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support
@ 2020-07-21  9:51 Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 01/20] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
                   ` (24 more replies)
  0 siblings, 25 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

This patchset adds some small enhancements, some rework, and support for
new hardware to the ioat rawdev driver. The rework and enhancements are
largely self-explanatory from the individual patches.

The new hardware support is for the Intel(R) Data Streaming Accelerator
(DSA), which will be present in future Intel processors. A description of
this new hardware is given in [1]. Functions specific to the new hardware
use the "idxd" prefix, for consistency with the kernel driver.

This set is being sent for initial review and evaluation. Any documentation
changes needed and a few additional updates to the code are planned for V2
of this patchset in the 20.11 timeframe.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

Bruce Richardson (16):
  raw/ioat: support multiple devices being tested
  app/test: change rawdev autotest to run selftest on all devs
  app/test: remove ioat-specific autotest
  raw/ioat: split header for readability
  raw/ioat: make the HW register spec private
  raw/ioat: add skeleton for vfio/uio based DSA device
  raw/ioat: create rawdev instances on idxd PCI probe
  raw/ioat: add datapath data structures for idxd devices
  raw/ioat: add configure function for idxd devices
  raw/ioat: add start and stop functions for idxd devices
  raw/ioat: add data path support for idxd devices
  raw/ioat: add info function for idxd devices
  raw/ioat: create separate statistics structure
  raw/ioat: move xstats functions to common file
  raw/ioat: add xstats tracking for idxd devices
  raw/ioat: clean up use of common test function

Cheng Jiang (1):
  raw/ioat: add a flag to control copying handle parameters

Kevin Laatz (3):
  usertools/dpdk-devbind.py: add support for DSA HW
  raw/ioat: add vdev probe for DSA/idxd devices
  raw/ioat: create rawdev instances for idxd vdevs

 app/test/test_rawdev.c                        |  37 +-
 drivers/raw/ioat/idxd_pci.c                   | 345 +++++++++++++
 drivers/raw/ioat/idxd_vdev.c                  | 344 +++++++++++++
 drivers/raw/ioat/ioat_common.c                | 250 +++++++++
 drivers/raw/ioat/ioat_private.h               |  82 +++
 drivers/raw/ioat/ioat_rawdev.c                |  94 +---
 drivers/raw/ioat/ioat_rawdev_test.c           |  18 +-
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} |  78 ++-
 drivers/raw/ioat/meson.build                  |  20 +-
 drivers/raw/ioat/rte_ioat_rawdev.h            | 169 ++-----
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 475 ++++++++++++++++++
 usertools/dpdk-devbind.py                     |   4 +-
 12 files changed, 1637 insertions(+), 279 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/idxd_vdev.c
 create mode 100644 drivers/raw/ioat/ioat_common.c
 create mode 100644 drivers/raw/ioat/ioat_private.h
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (78%)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

-- 
2.25.1



* [dpdk-dev] [PATCH 20.11 01/20] raw/ioat: add a flag to control copying handle parameters
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 02/20] raw/ioat: support multiple devices being tested Bruce Richardson
                   ` (23 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Cheng Jiang, Bruce Richardson

From: Cheng Jiang <Cheng1.jiang@intel.com>

Add a flag which controls whether the rte_ioat_enqueue_copy and
rte_ioat_completed_copies functions should process handle parameters. Not
doing so can improve performance when handle parameters are not
necessary.
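
As a rough usage sketch (not part of the patch), the flag is set in the
existing rte_ioat_rawdev_config structure at configure time; with it set,
the handle arguments on the data path can simply be zero. The dev_id,
addresses and length below are placeholders:

	struct rte_ioat_rawdev_config conf = {
		.ring_size = 512,
		.hdls_disable = true,	/* do not store/return src/dst handles */
	};
	/* pass &conf via struct rte_rawdev_info.dev_private to rte_rawdev_configure() */

	/* the src_hdl/dst_hdl arguments (0, 0 here) are ignored */
	rte_ioat_enqueue_copy(dev_id, src_phys, dst_phys, length, 0, 0, 0);
	rte_ioat_do_copies(dev_id);

	/* max_copies/src_hdls/dst_hdls are ignored; the return value is the
	 * number of newly completed copies */
	int done = rte_ioat_completed_copies(dev_id, 0, NULL, NULL);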

Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c     |  1 +
 drivers/raw/ioat/rte_ioat_rawdev.h | 45 ++++++++++++++++++++++--------
 2 files changed, 35 insertions(+), 11 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 7f1a15436..53b33c1a7 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -58,6 +58,7 @@ ioat_dev_configure(const struct rte_rawdev *dev, rte_rawdev_obj_t config,
 		return -EINVAL;
 
 	ioat->ring_size = params->ring_size;
+	ioat->hdls_disable = params->hdls_disable;
 	if (ioat->desc_ring != NULL) {
 		rte_memzone_free(ioat->desc_mz);
 		ioat->desc_ring = NULL;
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index f765a6557..fd3a8fe14 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -34,7 +34,8 @@
  * an ioat rawdev instance.
  */
 struct rte_ioat_rawdev_config {
-	unsigned short ring_size;
+	unsigned short ring_size; /**< size of job submission descriptor ring */
+	bool hdls_disable;        /**< when set, ignore user-supplied handle parameters */
 };
 
 /**
@@ -52,6 +53,7 @@ struct rte_ioat_rawdev {
 
 	unsigned short ring_size;
 	struct rte_ioat_generic_hw_desc *desc_ring;
+	bool hdls_disable;
 	__m128i *hdls; /* completion handles for returning to user */
 
 
@@ -84,10 +86,14 @@ struct rte_ioat_rawdev {
  *   The length of the data to be copied
  * @param src_hdl
  *   An opaque handle for the source data, to be returned when this operation
- *   has been completed and the user polls for the completion details
+ *   has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdl
  *   An opaque handle for the destination data, to be returned when this
- *   operation has been completed and the user polls for the completion details
+ *   operation has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param fence
  *   A flag parameter indicating that hardware should not begin to perform any
  *   subsequently enqueued copy operations until after this operation has
@@ -121,8 +127,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
 	desc->src_addr = src;
 	desc->dest_addr = dst;
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
 
-	ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl, (int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
 	ioat->enqueued++;
@@ -168,19 +176,29 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /**
  * Returns details of copy operations that have been completed
  *
- * Returns to the caller the user-provided "handles" for the copy operations
- * which have been completed by the hardware, and not already returned by
- * a previous call to this API.
+ * If the hdls_disable option was not set when the device was configured,
+ * the function will return to the caller the user-provided "handles" for
+ * the copy operations which have been completed by the hardware, and not
+ * already returned by a previous call to this API.
+ * If the hdls_disable option for the device was set on configure, the
+ * max_copies, src_hdls and dst_hdls parameters will be ignored, and the
+ * function returns the number of newly-completed operations.
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  * @param max_copies
  *   The number of entries which can fit in the src_hdls and dst_hdls
- *   arrays, i.e. max number of completed operations to report
+ *   arrays, i.e. max number of completed operations to report.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies
+ *   Array to hold the source handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies
+ *   Array to hold the destination handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @return
  *   -1 on error, with rte_errno set appropriately.
  *   Otherwise number of completed operations i.e. number of entries written
@@ -205,6 +223,11 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		return -1;
 	}
 
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
 	if (count > max_copies)
 		count = max_copies;
 
@@ -222,7 +245,7 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		src_hdls[i] = hdls[0];
 		dst_hdls[i] = hdls[1];
 	}
-
+end:
 	ioat->next_read = read;
 	ioat->completed += count;
 	return count;
-- 
2.25.1



* [dpdk-dev] [PATCH 20.11 02/20] raw/ioat: support multiple devices being tested
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 01/20] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 03/20] app/test: change rawdev autotest to run selftest on all devs Bruce Richardson
                   ` (22 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

The current selftest function uses a single global variable to track
state, which means that the selftest can only be run on a single instance.
Change this to an array so that multiple instances can be tested.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/ioat_rawdev_test.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index c463a82ad..e5b50ae9f 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -8,10 +8,13 @@
 #include "rte_rawdev.h"
 #include "rte_ioat_rawdev.h"
 
+#define MAX_SUPPORTED_RAWDEVS 64
+#define TEST_SKIPPED 77
+
 int ioat_rawdev_test(uint16_t dev_id); /* pre-define to keep compiler happy */
 
 static struct rte_mempool *pool;
-static unsigned short expected_ring_size;
+static unsigned short expected_ring_size[MAX_SUPPORTED_RAWDEVS];
 
 static int
 test_enqueue_copies(int dev_id)
@@ -148,10 +151,16 @@ ioat_rawdev_test(uint16_t dev_id)
 	unsigned int nb_xstats;
 	unsigned int i;
 
+	if (dev_id >= MAX_SUPPORTED_RAWDEVS) {
+		printf("Skipping test. Cannot test rawdevs with id's greater than %d\n",
+				MAX_SUPPORTED_RAWDEVS);
+		return TEST_SKIPPED;
+	}
+
 	rte_rawdev_info_get(dev_id, &info, sizeof(p));
-	if (p.ring_size != expected_ring_size) {
+	if (p.ring_size != expected_ring_size[dev_id]) {
 		printf("Error, initial ring size is not as expected (Actual: %d, Expected: %d)\n",
-				(int)p.ring_size, expected_ring_size);
+				(int)p.ring_size, expected_ring_size[dev_id]);
 		return -1;
 	}
 
@@ -166,7 +175,7 @@ ioat_rawdev_test(uint16_t dev_id)
 				IOAT_TEST_RINGSIZE, (int)p.ring_size);
 		return -1;
 	}
-	expected_ring_size = p.ring_size;
+	expected_ring_size[dev_id] = p.ring_size;
 
 	if (rte_rawdev_start(dev_id) != 0) {
 		printf("Error with rte_rawdev_start()\n");
-- 
2.25.1



* [dpdk-dev] [PATCH 20.11 03/20] app/test: change rawdev autotest to run selftest on all devs
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 01/20] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 02/20] raw/ioat: support multiple devices being tested Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 04/20] app/test: remove ioat-specific autotest Bruce Richardson
                   ` (21 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev
  Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson,
	hemant.agrawal, nipun.gupta

Rather than having each rawdev provide its own autotest command, we can
just use the generic rawdev_autotest command to test any and all available
rawdevs.
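
For a driver's own tests to be picked up by this generic command, it only
needs to register a self-test callback in its rawdev ops structure; the
fragment below is purely illustrative (the names are made up), but it
mirrors what the ioat/idxd drivers in this series do:

	static int
	my_rawdev_selftest(uint16_t dev_id)
	{
		/* exercise the device identified by dev_id; 0 means pass */
		return 0;
	}

	static const struct rte_rawdev_ops my_rawdev_ops = {
		.dev_selftest = my_rawdev_selftest,
	};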

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Cc: hemant.agrawal@nxp.com
Cc: nipun.gupta@nxp.com
---
 app/test/test_rawdev.c | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/app/test/test_rawdev.c b/app/test/test_rawdev.c
index d8d9595be..a62f719d6 100644
--- a/app/test/test_rawdev.c
+++ b/app/test/test_rawdev.c
@@ -14,14 +14,37 @@
 static int
 test_rawdev_selftest_impl(const char *pmd, const char *opts)
 {
+	int ret;
+
+	printf("\n### Test rawdev infrastructure using skeleton driver\n");
 	rte_vdev_init(pmd, opts);
-	return rte_rawdev_selftest(rte_rawdev_get_dev_id(pmd));
+	ret = rte_rawdev_selftest(rte_rawdev_get_dev_id(pmd));
+	rte_vdev_uninit(pmd);
+	return ret;
 }
 
 static int
 test_rawdev_selftest_skeleton(void)
 {
-	return test_rawdev_selftest_impl("rawdev_skeleton", "");
+	const int count = rte_rawdev_count();
+	int ret = 0;
+	int i;
+
+	/* basic sanity on rawdev infrastructure */
+	if (test_rawdev_selftest_impl("rawdev_skeleton", "") < 0)
+		return -1;
+
+	/* now run self-test on all rawdevs */
+	if (count > 0)
+		printf("\n### Run selftest on each available rawdev\n");
+	for (i = 0; i < count; i++) {
+		int result = rte_rawdev_selftest(i);
+		printf("Rawdev %u selftest: %s\n", i,
+				result == 0 ? "Passed" : "Failed");
+		ret |=  result;
+	}
+
+	return ret;
 }
 
 REGISTER_TEST_COMMAND(rawdev_autotest, test_rawdev_selftest_skeleton);
-- 
2.25.1



* [dpdk-dev] [PATCH 20.11 04/20] app/test: remove ioat-specific autotest
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (2 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 03/20] app/test: change rawdev autotest to run selftest on all devs Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 05/20] raw/ioat: split header for readability Bruce Richardson
                   ` (20 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Since the rawdev autotest can now be used to test all rawdevs on the
system, there is no need for a dedicated ioat autotest command.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_rawdev.c | 20 --------------------
 1 file changed, 20 deletions(-)

diff --git a/app/test/test_rawdev.c b/app/test/test_rawdev.c
index a62f719d6..2944b3a27 100644
--- a/app/test/test_rawdev.c
+++ b/app/test/test_rawdev.c
@@ -48,23 +48,3 @@ test_rawdev_selftest_skeleton(void)
 }
 
 REGISTER_TEST_COMMAND(rawdev_autotest, test_rawdev_selftest_skeleton);
-
-static int
-test_rawdev_selftest_ioat(void)
-{
-	const int count = rte_rawdev_count();
-	int i;
-
-	for (i = 0; i < count; i++) {
-		struct rte_rawdev_info info = { .dev_private = NULL };
-		if (rte_rawdev_info_get(i, &info, 0) == 0 &&
-				strstr(info.driver_name, "ioat") != NULL)
-			return rte_rawdev_selftest(i) == 0 ?
-					TEST_SUCCESS : TEST_FAILED;
-	}
-
-	printf("No IOAT rawdev found, skipping tests\n");
-	return TEST_SKIPPED;
-}
-
-REGISTER_TEST_COMMAND(ioat_rawdev_autotest, test_rawdev_selftest_ioat);
-- 
2.25.1



* [dpdk-dev] [PATCH 20.11 05/20] raw/ioat: split header for readability
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (3 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 04/20] app/test: remove ioat-specific autotest Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 06/20] raw/ioat: make the HW register spec private Bruce Richardson
                   ` (19 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Rather than having a single long, complicated header file for general use,
we can split things so that there is one header with all the publicly
needed information - data structures and function prototypes - while the
rest of the internal details are kept separately. This makes the APIs
easier to read, understand and use.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev.h     | 144 +---------------------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 164 +++++++++++++++++++++++++
 3 files changed, 171 insertions(+), 138 deletions(-)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 0878418ae..f66e9b605 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,4 +8,5 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
+		'rte_ioat_rawdev_fns.h',
 		'rte_ioat_spec.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index fd3a8fe14..6d338f50c 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -14,12 +14,7 @@
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
-#include <x86intrin.h>
-#include <rte_atomic.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+#include <rte_common.h>
 
 /** Name of the device driver */
 #define IOAT_PMD_RAWDEV_NAME rawdev_ioat
@@ -38,38 +33,6 @@ struct rte_ioat_rawdev_config {
 	bool hdls_disable;        /**< when set, ignore user-supplied handle parameters */
 };
 
-/**
- * @internal
- * Structure representing a device instance
- */
-struct rte_ioat_rawdev {
-	struct rte_rawdev *rawdev;
-	const struct rte_memzone *mz;
-	const struct rte_memzone *desc_mz;
-
-	volatile struct rte_ioat_registers *regs;
-	phys_addr_t status_addr;
-	phys_addr_t ring_addr;
-
-	unsigned short ring_size;
-	struct rte_ioat_generic_hw_desc *desc_ring;
-	bool hdls_disable;
-	__m128i *hdls; /* completion handles for returning to user */
-
-
-	unsigned short next_read;
-	unsigned short next_write;
-
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
-	/* to report completions, the device will write status back here */
-	volatile uint64_t status __rte_cache_aligned;
-};
-
 /**
  * Enqueue a copy operation onto the ioat device
  *
@@ -104,38 +67,7 @@ struct rte_ioat_rawdev {
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence)
-{
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
-	unsigned short read = ioat->next_read;
-	unsigned short write = ioat->next_write;
-	unsigned short mask = ioat->ring_size - 1;
-	unsigned short space = mask + read - write;
-	struct rte_ioat_generic_hw_desc *desc;
-
-	if (space == 0) {
-		ioat->enqueue_failed++;
-		return 0;
-	}
-
-	ioat->next_write = write + 1;
-	write &= mask;
-
-	desc = &ioat->desc_ring[write];
-	desc->size = length;
-	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
-	desc->src_addr = src;
-	desc->dest_addr = dst;
-	if (!ioat->hdls_disable)
-		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
-					(int64_t)src_hdl);
-
-	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
-
-	ioat->enqueued++;
-	return 1;
-}
+		int fence);
 
 /**
  * Trigger hardware to begin performing enqueued copy operations
@@ -147,31 +79,7 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
-{
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
-	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
-			.control.completion_update = 1;
-	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
-	ioat->started = ioat->enqueued;
-}
-
-/**
- * @internal
- * Returns the index of the last completed operation.
- */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
-{
-	uint64_t status = ioat->status;
-
-	/* lower 3 bits indicate "transfer status" : active, idle, halted.
-	 * We can ignore bit 0.
-	 */
-	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
-	return (status - ioat->ring_addr) >> 6;
-}
+rte_ioat_do_copies(int dev_id);
 
 /**
  * Returns details of copy operations that have been completed
@@ -206,49 +114,9 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
  */
 static inline int
 rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
-		uintptr_t *src_hdls, uintptr_t *dst_hdls)
-{
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
-	unsigned short mask = (ioat->ring_size - 1);
-	unsigned short read = ioat->next_read;
-	unsigned short end_read, count;
-	int error;
-	int i = 0;
-
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
-	count = (end_read - (read & mask)) & mask;
-
-	if (error) {
-		rte_errno = EIO;
-		return -1;
-	}
-
-	if (ioat->hdls_disable) {
-		read += count;
-		goto end;
-	}
-
-	if (count > max_copies)
-		count = max_copies;
-
-	for (; i < count - 1; i += 2, read += 2) {
-		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
-		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
+		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
-		_mm_storeu_si128((void *)&src_hdls[i],
-				_mm_unpacklo_epi64(hdls0, hdls1));
-		_mm_storeu_si128((void *)&dst_hdls[i],
-				_mm_unpackhi_epi64(hdls0, hdls1));
-	}
-	for (; i < count; i++, read++) {
-		uintptr_t *hdls = (void *)&ioat->hdls[read & mask];
-		src_hdls[i] = hdls[0];
-		dst_hdls[i] = hdls[1];
-	}
-end:
-	ioat->next_read = read;
-	ioat->completed += count;
-	return count;
-}
+/* include the implementation details from a separate file */
+#include "rte_ioat_rawdev_fns.h"
 
 #endif /* _RTE_IOAT_RAWDEV_H_ */
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
new file mode 100644
index 000000000..06b4edcbb
--- /dev/null
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Intel Corporation
+ */
+#ifndef _RTE_IOAT_RAWDEV_FNS_H_
+#define _RTE_IOAT_RAWDEV_FNS_H_
+
+#include <x86intrin.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device instance
+ */
+struct rte_ioat_rawdev {
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	const struct rte_memzone *desc_mz;
+
+	volatile struct rte_ioat_registers *regs;
+	phys_addr_t status_addr;
+	phys_addr_t ring_addr;
+
+	unsigned short ring_size;
+	bool hdls_disable;
+	struct rte_ioat_generic_hw_desc *desc_ring;
+	__m128i *hdls; /* completion handles for returning to user */
+
+
+	unsigned short next_read;
+	unsigned short next_write;
+
+	/* some statistics for tracking, if added/changed update xstats fns*/
+	uint64_t enqueue_failed __rte_cache_aligned;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+
+	/* to report completions, the device will write status back here */
+	volatile uint64_t status __rte_cache_aligned;
+};
+
+/**
+ * Enqueue a copy operation onto the ioat device
+ */
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
+		int fence)
+{
+	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	unsigned short read = ioat->next_read;
+	unsigned short write = ioat->next_write;
+	unsigned short mask = ioat->ring_size - 1;
+	unsigned short space = mask + read - write;
+	struct rte_ioat_generic_hw_desc *desc;
+
+	if (space == 0) {
+		ioat->enqueue_failed++;
+		return 0;
+	}
+
+	ioat->next_write = write + 1;
+	write &= mask;
+
+	desc = &ioat->desc_ring[write];
+	desc->size = length;
+	/* set descriptor write-back every 16th descriptor */
+	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
+	desc->src_addr = src;
+	desc->dest_addr = dst;
+
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
+	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
+
+	ioat->enqueued++;
+	return 1;
+}
+
+/**
+ * Trigger hardware to begin performing enqueued copy operations
+ */
+static inline void
+rte_ioat_do_copies(int dev_id)
+{
+	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
+			.control.completion_update = 1;
+	rte_compiler_barrier();
+	ioat->regs->dmacount = ioat->next_write;
+	ioat->started = ioat->enqueued;
+}
+
+/**
+ * @internal
+ * Returns the index of the last completed operation.
+ */
+static inline int
+rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+{
+	uint64_t status = ioat->status;
+
+	/* lower 3 bits indicate "transfer status" : active, idle, halted.
+	 * We can ignore bit 0.
+	 */
+	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
+	return (status - ioat->ring_addr) >> 6;
+}
+
+/**
+ * Returns details of copy operations that have been completed
+ */
+static inline int
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	unsigned short mask = (ioat->ring_size - 1);
+	unsigned short read = ioat->next_read;
+	unsigned short end_read, count;
+	int error;
+	int i = 0;
+
+	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	count = (end_read - (read & mask)) & mask;
+
+	if (error) {
+		rte_errno = EIO;
+		return -1;
+	}
+
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
+	if (count > max_copies)
+		count = max_copies;
+
+	for (; i < count - 1; i += 2, read += 2) {
+		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
+		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
+
+		_mm_storeu_si128((void *)&src_hdls[i],
+				_mm_unpacklo_epi64(hdls0, hdls1));
+		_mm_storeu_si128((void *)&dst_hdls[i],
+				_mm_unpackhi_epi64(hdls0, hdls1));
+	}
+	for (; i < count; i++, read++) {
+		uintptr_t *hdls = (void *)&ioat->hdls[read & mask];
+		src_hdls[i] = hdls[0];
+		dst_hdls[i] = hdls[1];
+	}
+
+end:
+	ioat->next_read = read;
+	ioat->completed += count;
+	return count;
+}
+
+#endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
-- 
2.25.1



* [dpdk-dev] [PATCH 20.11 06/20] raw/ioat: make the HW register spec private
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (4 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 05/20] raw/ioat: split header for readability Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 07/20] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
                   ` (18 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Only a few definitions from the hardware spec are actually used in the
driver runtime, so we can copy over those few and make the rest of the spec
a private header in the driver.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c                |  3 ++
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} | 26 -----------
 drivers/raw/ioat/meson.build                  |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 44 +++++++++++++++++--
 4 files changed, 44 insertions(+), 32 deletions(-)
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (91%)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 53b33c1a7..11c6a57e1 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -4,10 +4,12 @@
 
 #include <rte_cycles.h>
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 #include <rte_string_fns.h>
 #include <rte_rawdev_pmd.h>
 
 #include "rte_ioat_rawdev.h"
+#include "ioat_spec.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -260,6 +262,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
+	ioat->doorbell = &ioat->regs->dmacount;
 	ioat->ring_size = 0;
 	ioat->desc_ring = NULL;
 	ioat->status_addr = ioat->mz->iova +
diff --git a/drivers/raw/ioat/rte_ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
similarity index 91%
rename from drivers/raw/ioat/rte_ioat_spec.h
rename to drivers/raw/ioat/ioat_spec.h
index c6e7929b2..9645e16d4 100644
--- a/drivers/raw/ioat/rte_ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -86,32 +86,6 @@ struct rte_ioat_registers {
 
 #define RTE_IOAT_CHANCMP_ALIGN			8	/* CHANCMP address must be 64-bit aligned */
 
-struct rte_ioat_generic_hw_desc {
-	uint32_t size;
-	union {
-		uint32_t control_raw;
-		struct {
-			uint32_t int_enable: 1;
-			uint32_t src_snoop_disable: 1;
-			uint32_t dest_snoop_disable: 1;
-			uint32_t completion_update: 1;
-			uint32_t fence: 1;
-			uint32_t reserved2: 1;
-			uint32_t src_page_break: 1;
-			uint32_t dest_page_break: 1;
-			uint32_t bundle: 1;
-			uint32_t dest_dca: 1;
-			uint32_t hint: 1;
-			uint32_t reserved: 13;
-			uint32_t op: 8;
-		} control;
-	} u;
-	uint64_t src_addr;
-	uint64_t dest_addr;
-	uint64_t next;
-	uint64_t op_specific[4];
-};
-
 struct rte_ioat_dma_hw_desc {
 	uint32_t size;
 	union {
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index f66e9b605..06636f8a9 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,5 +8,4 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-		'rte_ioat_rawdev_fns.h',
-		'rte_ioat_spec.h')
+		'rte_ioat_rawdev_fns.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 06b4edcbb..0cee6b1b0 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -5,9 +5,37 @@
 #define _RTE_IOAT_RAWDEV_FNS_H_
 
 #include <x86intrin.h>
-#include <rte_memzone.h>
 #include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device descriptor
+ */
+struct rte_ioat_generic_hw_desc {
+	uint32_t size;
+	union {
+		uint32_t control_raw;
+		struct {
+			uint32_t int_enable: 1;
+			uint32_t src_snoop_disable: 1;
+			uint32_t dest_snoop_disable: 1;
+			uint32_t completion_update: 1;
+			uint32_t fence: 1;
+			uint32_t reserved2: 1;
+			uint32_t src_page_break: 1;
+			uint32_t dest_page_break: 1;
+			uint32_t bundle: 1;
+			uint32_t dest_dca: 1;
+			uint32_t hint: 1;
+			uint32_t reserved: 13;
+			uint32_t op: 8;
+		} control;
+	} u;
+	uint64_t src_addr;
+	uint64_t dest_addr;
+	uint64_t next;
+	uint64_t op_specific[4];
+};
 
 /**
  * @internal
@@ -18,7 +46,7 @@ struct rte_ioat_rawdev {
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile struct rte_ioat_registers *regs;
+	volatile uint16_t *doorbell;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -39,8 +67,16 @@ struct rte_ioat_rawdev {
 
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
+
+	/* pointer to the register bar */
+	volatile struct rte_ioat_registers *regs;
 };
 
+#define RTE_IOAT_CHANSTS_IDLE			0x1
+#define RTE_IOAT_CHANSTS_SUSPENDED		0x2
+#define RTE_IOAT_CHANSTS_HALTED		0x3
+#define RTE_IOAT_CHANSTS_ARMED			0x4
+
 /**
  * Enqueue a copy operation onto the ioat device
  */
@@ -90,7 +126,7 @@ rte_ioat_do_copies(int dev_id)
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
 			.control.completion_update = 1;
 	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
+	*ioat->doorbell = ioat->next_write;
 	ioat->started = ioat->enqueued;
 }
 
-- 
2.25.1



* [dpdk-dev] [PATCH 20.11 07/20] usertools/dpdk-devbind.py: add support for DSA HW
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (5 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 06/20] raw/ioat: make the HW register spec private Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 08/20] raw/ioat: add skeleton for vfio/uio based DSA device Bruce Richardson
                   ` (17 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Intel Data Streaming Accelerator (Intel DSA) is a high-performance data
copy and transformation accelerator which will be integrated into future
Intel processors [1].

Add DSA device support to the dpdk-devbind.py script.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 usertools/dpdk-devbind.py | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index dc008823f..39743abb2 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -47,6 +47,8 @@
               'SVendor': None, 'SDevice': None}
 intel_ioat_icx = {'Class': '08', 'Vendor': '8086', 'Device': '0b00',
               'SVendor': None, 'SDevice': None}
+intel_idxd_spr = {'Class': '08', 'Vendor': '8086', 'Device': '0b25',
+              'SVendor': None, 'SDevice': None}
 intel_ntb_skx = {'Class': '06', 'Vendor': '8086', 'Device': '201c',
               'SVendor': None, 'SDevice': None}
 
@@ -56,7 +58,7 @@
 eventdev_devices = [cavium_sso, cavium_tim, octeontx2_sso]
 mempool_devices = [cavium_fpa, octeontx2_npa]
 compress_devices = [cavium_zip]
-misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_ntb_skx, octeontx2_dma]
+misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_idxd_spr, intel_ntb_skx, octeontx2_dma]
 
 # global dict ethernet devices present. Dictionary indexed by PCI address.
 # Each device within this is itself a dictionary of device properties
-- 
2.25.1



* [dpdk-dev] [PATCH 20.11 08/20] raw/ioat: add skeleton for vfio/uio based DSA device
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (6 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 07/20] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 09/20] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
                   ` (16 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Add the basic probe/remove skeleton code for DSA devices which are bound
directly to a vfio or uio driver. The kernel module supporting these uses
the "idxd" name, so that name is used as the function and file prefix to
avoid conflict with the existing "ioat"-prefixed functions.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c     | 56 +++++++++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 27 ++++++++++++++++
 drivers/raw/ioat/ioat_rawdev.c  |  9 +-----
 drivers/raw/ioat/meson.build    |  6 ++--
 4 files changed, 88 insertions(+), 10 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/ioat_private.h

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
new file mode 100644
index 000000000..f6af9d33a
--- /dev/null
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_pci.h>
+
+#include "ioat_private.h"
+
+#define IDXD_VENDOR_ID		0x8086
+#define IDXD_DEVICE_ID_SPR	0x0B25
+
+#define IDXD_PMD_RAWDEV_NAME_PCI rawdev_idxd_pci
+
+const struct rte_pci_id pci_id_idxd_map[] = {
+	{ RTE_PCI_DEVICE(IDXD_VENDOR_ID, IDXD_DEVICE_ID_SPR) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+{
+	int ret = 0;
+	char name[32];
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
+	dev->device.driver = &drv->driver;
+
+	return ret;
+}
+
+static int
+idxd_rawdev_remove_pci(struct rte_pci_device *dev)
+{
+	char name[32];
+	int ret = 0;
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+
+	IOAT_PMD_INFO("Closing %s on NUMA node %d", name, dev->device.numa_node);
+
+	return ret;
+}
+
+struct rte_pci_driver idxd_pmd_drv_pci = {
+	.id_table = pci_id_idxd_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = idxd_rawdev_probe_pci,
+	.remove = idxd_rawdev_remove_pci,
+};
+
+RTE_PMD_REGISTER_PCI(IDXD_PMD_RAWDEV_NAME_PCI, idxd_pmd_drv_pci);
+RTE_PMD_REGISTER_PCI_TABLE(IDXD_PMD_RAWDEV_NAME_PCI, pci_id_idxd_map);
+RTE_PMD_REGISTER_KMOD_DEP(IDXD_PMD_RAWDEV_NAME_PCI,
+			  "* igb_uio | uio_pci_generic | vfio-pci");
+
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
new file mode 100644
index 000000000..d87d4d055
--- /dev/null
+++ b/drivers/raw/ioat/ioat_private.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _IOAT_PRIVATE_H_
+#define _IOAT_PRIVATE_H_
+
+/**
+ * @file ioat_private.h
+ *
+ * Private data structures for the idxd/DSA part of ioat device driver
+ *
+ * @warning
+ * @b EXPERIMENTAL: these structures and APIs may change without prior notice
+ */
+
+extern int ioat_pmd_logtype;
+
+#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
+		ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
+
+#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
+#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
+#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
+#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
+
+#endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 11c6a57e1..8f9c8b56f 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -10,6 +10,7 @@
 
 #include "rte_ioat_rawdev.h"
 #include "ioat_spec.h"
+#include "ioat_private.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -29,14 +30,6 @@ static struct rte_pci_driver ioat_pmd_drv;
 
 RTE_LOG_REGISTER(ioat_pmd_logtype, rawdev.ioat, INFO);
 
-#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
-	ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
-
-#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
-#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
-#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
-#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
-
 #define DESC_SZ sizeof(struct rte_ioat_generic_hw_desc)
 #define COMPLETION_SZ sizeof(__m128i)
 
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 06636f8a9..3529635e9 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -3,8 +3,10 @@
 
 build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
-sources = files('ioat_rawdev.c',
-		'ioat_rawdev_test.c')
+sources = files(
+	'idxd_pci.c',
+	'ioat_rawdev.c',
+	'ioat_rawdev_test.c')
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-- 
2.25.1



* [dpdk-dev] [PATCH 20.11 09/20] raw/ioat: add vdev probe for DSA/idxd devices
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (7 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 08/20] raw/ioat: add skeleton for vfio/uio based DSA device Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 10/20] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
                   ` (15 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

The Intel DSA devices can be exposed to userspace via the kernel driver, so
they can be used without having to bind them to vfio/uio. Therefore we add
support for using those kernel-configured devices as vdevs, taking as a
parameter the individual HW work queue to be used by the vdev.
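
As a usage sketch (not part of this patch), such a vdev could be requested
on the EAL command line with an argument of the form
"--vdev=rawdev_idxd,wq=0.1", or created programmatically; the device and
queue numbers here are only placeholders:

	/* attach to work queue 1 of kernel-configured DSA device 0 */
	rte_vdev_init("rawdev_idxd", "wq=0.1");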

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_vdev.c | 127 +++++++++++++++++++++++++++++++++++
 drivers/raw/ioat/meson.build |   6 +-
 2 files changed, 132 insertions(+), 1 deletion(-)
 create mode 100644 drivers/raw/ioat/idxd_vdev.c

diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
new file mode 100644
index 000000000..73fce6d87
--- /dev/null
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -0,0 +1,127 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_vdev.h>
+#include <rte_kvargs.h>
+#include <rte_string_fns.h>
+#include <rte_rawdev_pmd.h>
+
+#include "ioat_private.h"
+
+/** Name of the device driver */
+#define IDXD_PMD_RAWDEV_NAME rawdev_idxd
+/* takes a work queue(WQ) as parameter */
+#define IDXD_ARG_WQ		"wq"
+
+extern struct rte_vdev_driver idxd_rawdev_drv_vdev;
+
+static const char * const valid_args[] = {
+	IDXD_ARG_WQ,
+	NULL
+};
+
+struct idxd_vdev_args {
+	uint8_t device_id;
+	uint8_t wq_id;
+};
+
+static int
+idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
+			  void *extra_args)
+{
+	struct idxd_vdev_args *args = (struct idxd_vdev_args *)extra_args;
+	int dev, wq, bytes = -1;
+	int read = sscanf(value, "%d.%d%n", &dev, &wq, &bytes);
+
+	if (read != 2 || bytes != (int)strlen(value)) {
+		IOAT_PMD_ERR("Error parsing work-queue id. Must be in <dev_id>.<queue_id> format");
+		return -EINVAL;
+	}
+
+	if (dev >= UINT8_MAX || wq >= UINT8_MAX) {
+		IOAT_PMD_ERR("Device or work queue id out of range");
+		return -EINVAL;
+	}
+
+	args->device_id = dev;
+	args->wq_id = wq;
+
+	return 0;
+}
+
+static int
+idxd_vdev_parse_params(struct rte_kvargs *kvlist, struct idxd_vdev_args *args)
+{
+	if (rte_kvargs_count(kvlist, IDXD_ARG_WQ) == 1) {
+		if (rte_kvargs_process(kvlist, IDXD_ARG_WQ,
+				&idxd_rawdev_parse_wq, args) < 0) {
+			IOAT_PMD_ERR("Error parsing %s", IDXD_ARG_WQ);
+			goto free;
+		}
+	} else {
+		IOAT_PMD_ERR("%s is a mandatory arg", IDXD_ARG_WQ);
+		return -EINVAL;
+	}
+
+	return 0;
+
+free:
+	if (kvlist)
+		rte_kvargs_free(kvlist);
+	return -EINVAL;
+}
+
+static int
+idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
+{
+	struct rte_kvargs *kvlist;
+	struct idxd_vdev_args vdev_args;
+	const char *name;
+	int ret = 0;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Initializing pmd_idxd for %s", name);
+
+	kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev), valid_args);
+	if (kvlist == NULL) {
+		IOAT_PMD_ERR("Invalid kvargs key");
+		return -EINVAL;
+	}
+
+	ret = idxd_vdev_parse_params(kvlist, &vdev_args);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to parse kvargs");
+		return -EINVAL;
+	}
+
+	vdev->device.driver = &idxd_rawdev_drv_vdev.driver;
+
+	return 0;
+}
+
+static int
+idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
+{
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Remove DSA vdev %p", name);
+
+	return 0;
+}
+
+struct rte_vdev_driver idxd_rawdev_drv_vdev = {
+	.probe = idxd_rawdev_probe_vdev,
+	.remove = idxd_rawdev_remove_vdev,
+};
+
+RTE_PMD_REGISTER_VDEV(IDXD_PMD_RAWDEV_NAME, idxd_rawdev_drv_vdev);
+RTE_PMD_REGISTER_PARAM_STRING(IDXD_PMD_RAWDEV_NAME,
+			      "wq=<string>");
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 3529635e9..b343b7367 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -5,9 +5,13 @@ build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
+	'idxd_vdev.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
-deps += ['rawdev', 'bus_pci', 'mbuf']
+deps += ['bus_pci',
+	'bus_vdev',
+	'mbuf',
+	'rawdev']
 
 install_headers('rte_ioat_rawdev.h',
 		'rte_ioat_rawdev_fns.h')
-- 
2.25.1



* [dpdk-dev] [PATCH 20.11 10/20] raw/ioat: create rawdev instances on idxd PCI probe
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (8 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 09/20] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 11/20] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
                   ` (14 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

When a matching device is found via PCI probe, create a rawdev instance
for each queue on the hardware. Use an empty self-test function for these
devices so that the overall rawdev_autotest does not report failures.
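
As an illustration of the result (the PCI address below is just an
example), each work queue then appears as a separate rawdev named after
the PCI device with a "-q<N>" suffix, which applications can look up by
name:

	/* look up the rawdev created for work queue 0 of a probed DSA device */
	int dev_id = rte_rawdev_get_dev_id("0000:6a:01.0-q0");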

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            | 235 ++++++++++++++++++++++++-
 drivers/raw/ioat/ioat_common.c         |  62 +++++++
 drivers/raw/ioat/ioat_private.h        |  37 ++++
 drivers/raw/ioat/ioat_rawdev_test.c    |   7 +
 drivers/raw/ioat/ioat_spec.h           |  52 ++++++
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  35 +++-
 7 files changed, 426 insertions(+), 3 deletions(-)
 create mode 100644 drivers/raw/ioat/ioat_common.c

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index f6af9d33a..11c07efaa 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -3,8 +3,10 @@
  */
 
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 
 #include "ioat_private.h"
+#include "ioat_spec.h"
 
 #define IDXD_VENDOR_ID		0x8086
 #define IDXD_DEVICE_ID_SPR	0x0B25
@@ -16,17 +18,244 @@ const struct rte_pci_id pci_id_idxd_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+static inline int
+idxd_pci_dev_command(struct idxd_rawdev *idxd, enum rte_idxd_cmds command)
+{
+	uint8_t err_code;
+	uint16_t qid = idxd->qid;
+	int i = 0;
+
+	if (command >= idxd_disable_wq && command <= idxd_reset_wq)
+		qid = (1 << qid);
+	rte_spinlock_lock(&idxd->u.pci->lk);
+	idxd->u.pci->regs->cmd = (command << IDXD_CMD_SHIFT) | qid;
+
+	do {
+		rte_pause();
+		err_code = idxd->u.pci->regs->cmdstatus;
+		if (++i >= 1000) {
+			IOAT_PMD_ERR("Timeout waiting for command response from HW");
+			rte_spinlock_unlock(&idxd->u.pci->lk);
+			return err_code;
+		}
+	} while (idxd->u.pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK);
+	rte_spinlock_unlock(&idxd->u.pci->lk);
+
+	return err_code & CMDSTATUS_ERR_MASK;
+}
+
+static int
+idxd_is_wq_enabled(struct idxd_rawdev *idxd)
+{
+	uint32_t state = (idxd->u.pci->wq_regs[idxd->qid].wqcfg[WQ_STATE_IDX] >> WQ_STATE_SHIFT);
+	return (state & WQ_STATE_MASK) == 0x1;
+}
+
+static const struct rte_rawdev_ops idxd_pci_ops = {
+		.dev_selftest = idxd_rawdev_test
+};
+
+/* each portal uses 4 x 4k pages */
+#define IDXD_PORTAL_SIZE (4096 * 4)
+
+static int
+init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd)
+{
+	struct idxd_pci_common *pci;
+	uint8_t nb_groups, nb_engines, nb_wqs;
+	uint16_t grp_offset, wq_offset; /* how far into bar0 the regs are */
+	uint16_t wq_size, total_wq_size;
+	uint8_t lg2_max_batch, lg2_max_copy_size;
+	unsigned int i, err_code;
+
+	pci = malloc(sizeof(*pci));
+	if (pci == NULL) {
+		IOAT_PMD_ERR("%s: Can't allocate memory", __func__);
+		goto err;
+	}
+	rte_spinlock_init(&pci->lk);
+
+	/* assign the bar registers, and then configure device */
+	pci->regs = dev->mem_resource[0].addr;
+	grp_offset = (uint16_t)pci->regs->offsets[0];
+	pci->grp_regs = RTE_PTR_ADD(pci->regs, grp_offset * 0x100);
+	wq_offset = (uint16_t)(pci->regs->offsets[0] >> 16);
+	pci->wq_regs = RTE_PTR_ADD(pci->regs, wq_offset * 0x100);
+	pci->portals = dev->mem_resource[2].addr;
+
+	/* sanity check device status */
+	if (pci->regs->gensts & GENSTS_DEV_STATE_MASK) {
+		/* need function-level-reset (FLR) or is enabled */
+		IOAT_PMD_ERR("Device status is not disabled, cannot init");
+		goto err;
+	}
+	if (pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK) {
+		/* command in progress */
+		IOAT_PMD_ERR("Device has a command in progress, cannot init");
+		goto err;
+	}
+
+	/* read basic info about the hardware for use when configuring */
+	nb_groups = (uint8_t)pci->regs->grpcap;
+	nb_engines = (uint8_t)pci->regs->engcap;
+	nb_wqs = (uint8_t)(pci->regs->wqcap >> 16);
+	total_wq_size = (uint16_t)pci->regs->wqcap;
+	lg2_max_copy_size = (uint8_t)(pci->regs->gencap >> 16) & 0x1F;
+	lg2_max_batch = (uint8_t)(pci->regs->gencap >> 21) & 0x0F;
+
+	IOAT_PMD_DEBUG("nb_groups = %u, nb_engines = %u, nb_wqs = %u",
+			nb_groups, nb_engines, nb_wqs);
+
+	/* zero out any old config */
+	for (i = 0; i < nb_groups; i++) {
+		pci->grp_regs[i].grpengcfg = 0;
+		pci->grp_regs[i].grpwqcfg[0] = 0;
+	}
+	for (i = 0; i < nb_wqs; i++)
+		pci->wq_regs[i].wqcfg[0] = 0;
+
+	/* put each engine into a separate group to avoid reordering */
+	if (nb_groups > nb_engines)
+		nb_groups = nb_engines;
+	if (nb_groups < nb_engines)
+		nb_engines = nb_groups;
+
+	/* assign engines to groups, round-robin style */
+	for (i = 0; i < nb_engines; i++) {
+		IOAT_PMD_DEBUG("Assigning engine %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpengcfg |= (1ULL << i);
+	}
+
+	/* now do the same for queues and give work slots to each queue */
+	wq_size = total_wq_size / nb_wqs;
+	IOAT_PMD_DEBUG("Work queue size = %u, max batch = 2^%u, max copy = 2^%u",
+			wq_size, lg2_max_batch, lg2_max_copy_size);
+	for (i = 0; i < nb_wqs; i++) {
+		/* add engine "i" to a group */
+		IOAT_PMD_DEBUG("Assigning work queue %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpwqcfg[0] |= (1ULL << i);
+		/* now configure it, in terms of size, max batch, mode */
+		pci->wq_regs[i].wqcfg[0] = wq_size;
+		pci->wq_regs[i].wqcfg[2] = 0x11; /* TODO: use defines - dedicated mode, priority 1 */
+		pci->wq_regs[i].wqcfg[3] = (lg2_max_batch << 5) |
+				lg2_max_copy_size;
+	}
+
+	/* dump the group configuration to output */
+	for (i = 0; i < nb_groups; i++) {
+		IOAT_PMD_DEBUG("## Group %d", i);
+		IOAT_PMD_DEBUG("    GRPWQCFG: %"PRIx64, pci->grp_regs[i].grpwqcfg[0]);
+		IOAT_PMD_DEBUG("    GRPENGCFG: %"PRIx64, pci->grp_regs[i].grpengcfg);
+		IOAT_PMD_DEBUG("    GRPFLAGS: %"PRIx32, pci->grp_regs[i].grpflags);
+	}
+
+	idxd->u.pci = pci;
+	idxd->max_batches = wq_size;
+
+	/* enable the device itself */
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error enabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device enabled OK");
+
+	return nb_wqs;
+
+err:
+	free(pci);
+	return -1;
+}
+
 static int
 idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 {
-	int ret = 0;
+	struct idxd_rawdev idxd = {0};
+	uint8_t nb_wqs;
+	int qid, ret = 0;
 	char name[32];
 
 	rte_pci_device_name(&dev->addr, name, sizeof(name));
 	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
 	dev->device.driver = &drv->driver;
 
-	return ret;
+	ret = init_pci_device(dev, &idxd);
+	if (ret < 0) {
+		IOAT_PMD_ERR("Error initializing PCI hardware");
+		return ret;
+	}
+	nb_wqs = (uint8_t)ret;
+
+	/* set up one device for each queue */
+	for (qid = 0; qid < nb_wqs; qid++) {
+		char qname[32];
+
+		/* add the queue number to each device name */
+		snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
+		idxd.qid = qid;
+		idxd.public.portal = RTE_PTR_ADD(idxd.u.pci->portals,
+				qid * IDXD_PORTAL_SIZE);
+		if (idxd_is_wq_enabled(&idxd))
+			IOAT_PMD_ERR("Error, WQ %u seems enabled", qid);
+		ret = idxd_rawdev_create(qname, &dev->device,
+				&idxd, &idxd_pci_ops);
+		if (ret != 0){
+			IOAT_PMD_ERR("Failed to create rawdev %s", name);
+			if (qid == 0) /* if no devices using this, free pci */
+				free(idxd.u.pci);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+idxd_rawdev_destroy(const char *name)
+{
+	int ret;
+	uint8_t err_code;
+	struct rte_rawdev *rdev;
+	struct idxd_rawdev *idxd;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid device name");
+		return -EINVAL;
+	}
+
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* disable the device */
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error disabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device disabled OK");
+
+	/* free device memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+		rte_memzone_free(idxd->mz);
+	}
+
+	/* rte_rawdev_close is called by pmd_release */
+	ret = rte_rawdev_pmd_release(rdev);
+	if (ret)
+		IOAT_PMD_DEBUG("Device cleanup failed");
+
+	return 0;
 }
 
 static int
@@ -39,6 +268,8 @@ idxd_rawdev_remove_pci(struct rte_pci_device *dev)
 
 	IOAT_PMD_INFO("Closing %s on NUMA node %d", name, dev->device.numa_node);
 
+	ret = idxd_rawdev_destroy(name);
+
 	return ret;
 }
 
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
new file mode 100644
index 000000000..4397be886
--- /dev/null
+++ b/drivers/raw/ioat/ioat_common.c
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_rawdev_pmd.h>
+#include <rte_memzone.h>
+
+#include "ioat_private.h"
+
+int
+idxd_rawdev_create(const char *name, struct rte_device *dev,
+		   const struct idxd_rawdev *base_idxd,
+		   const struct rte_rawdev_ops *ops)
+{
+	struct idxd_rawdev *idxd;
+	struct rte_rawdev *rawdev = NULL;
+	const struct rte_memzone *mz = NULL;
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	int ret = 0;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid name of the device!");
+		ret = -EINVAL;
+		goto cleanup;
+	}
+
+	/* Allocate device structure */
+	rawdev = rte_rawdev_pmd_allocate(name, sizeof(struct idxd_rawdev),
+					 dev->numa_node);
+	if (rawdev == NULL) {
+		IOAT_PMD_ERR("Unable to allocate raw device");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+
+	snprintf(mz_name, sizeof(mz_name), "rawdev%u_private", rawdev->dev_id);
+	mz = rte_memzone_reserve(mz_name, sizeof(struct idxd_rawdev),
+			dev->numa_node, RTE_MEMZONE_IOVA_CONTIG);
+	if (mz == NULL) {
+		IOAT_PMD_ERR("Unable to reserve memzone for private data");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+	rawdev->dev_private = mz->addr;
+	rawdev->dev_ops = ops;
+	rawdev->device = dev;
+	rawdev->driver_name = RTE_STR(IOAT_PMD_RAWDEV_NAME);
+
+	idxd = rawdev->dev_private;
+	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->rawdev = rawdev;
+	idxd->mz = mz;
+
+	return 0;
+
+cleanup:
+	if (rawdev)
+		rte_rawdev_pmd_release(rawdev);
+
+	return ret;
+}
+
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index d87d4d055..2ddaddc37 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -14,6 +14,10 @@
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
+#include <rte_spinlock.h>
+#include <rte_rawdev_pmd.h>
+#include "rte_ioat_rawdev.h"
+
 extern int ioat_pmd_logtype;
 
 #define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
@@ -24,4 +28,37 @@ extern int ioat_pmd_logtype;
 #define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
 #define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
 
+struct idxd_pci_common {
+	rte_spinlock_t lk;
+	volatile struct rte_idxd_bar0 *regs;
+	volatile struct rte_idxd_wqcfg *wq_regs;
+	volatile struct rte_idxd_grpcfg *grp_regs;
+	volatile void *portals;
+};
+
+struct idxd_rawdev {
+	struct rte_idxd_rawdev public; /* the public members, must be first */
+
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	uint8_t qid;
+	uint16_t max_batches;
+
+	union {
+		struct {
+			struct accfg_ctx *ctx;
+			struct accfg_device *device;
+			struct accfg_wq *wq;
+		} vdev;
+
+		struct idxd_pci_common *pci;
+	} u;
+};
+
+extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
+		       const struct idxd_rawdev *idxd,
+		       const struct rte_rawdev_ops *ops);
+
+extern int idxd_rawdev_test(uint16_t dev_id);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index e5b50ae9f..b208b8c19 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -7,6 +7,7 @@
 #include <rte_mbuf.h>
 #include "rte_rawdev.h"
 #include "rte_ioat_rawdev.h"
+#include "ioat_private.h"
 
 #define MAX_SUPPORTED_RAWDEVS 64
 #define TEST_SKIPPED 77
@@ -252,3 +253,9 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
+
+int
+idxd_rawdev_test(uint16_t dev_id __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/raw/ioat/ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
index 9645e16d4..b865fc0ec 100644
--- a/drivers/raw/ioat/ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -268,6 +268,58 @@ union rte_ioat_hw_desc {
 	struct rte_ioat_pq_update_hw_desc pq_update;
 };
 
+/*** Definitions for Intel(R) Data Streaming Accelerator Follow ***/
+
+#define IDXD_CMD_SHIFT 20
+enum rte_idxd_cmds {
+	idxd_enable_dev = 1,
+	idxd_disable_dev,
+	idxd_drain_all,
+	idxd_abort_all,
+	idxd_reset_device,
+	idxd_enable_wq,
+	idxd_disable_wq,
+	idxd_drain_wq,
+	idxd_abort_wq,
+	idxd_reset_wq,
+};
+
+/* General bar0 registers */
+struct rte_idxd_bar0 {
+	uint32_t version    __rte_cache_aligned; /* offset 0x00 */
+	uint64_t gencap     __rte_aligned(0x10); /* offset 0x10 */
+	uint64_t wqcap      __rte_aligned(0x10); /* offset 0x20 */
+	uint64_t grpcap     __rte_aligned(0x10); /* offset 0x30 */
+	uint64_t engcap     __rte_aligned(0x08); /* offset 0x38 */
+	uint64_t opcap      __rte_aligned(0x10); /* offset 0x40 */
+	uint64_t offsets[2] __rte_aligned(0x20); /* offset 0x60 */
+	uint32_t gencfg     __rte_aligned(0x20); /* offset 0x80 */
+	uint32_t genctrl    __rte_aligned(0x08); /* offset 0x88 */
+	uint32_t gensts     __rte_aligned(0x10); /* offset 0x90 */
+	uint32_t intcause   __rte_aligned(0x08); /* offset 0x98 */
+	uint32_t cmd        __rte_aligned(0x10); /* offset 0xA0 */
+	uint32_t cmdstatus  __rte_aligned(0x08); /* offset 0xA8 */
+	uint64_t swerror[4] __rte_aligned(0x20); /* offset 0xC0 */
+};
+
+struct rte_idxd_wqcfg {
+	uint32_t wqcfg[8] __rte_aligned(32); /* 32-byte register */
+};
+#define WQ_STATE_IDX    6 /* 7th 32-bit value in array */
+#define WQ_STATE_SHIFT 30
+#define WQ_STATE_MASK 0x3
+
+struct rte_idxd_grpcfg {
+	uint64_t grpwqcfg[4]  __rte_cache_aligned; /* 64-byte register set */
+	uint64_t grpengcfg;  /* offset 32 */
+	uint32_t grpflags;   /* offset 40 */
+};
+
+#define GENSTS_DEV_STATE_MASK 0x03
+#define CMDSTATUS_ACTIVE_SHIFT 31
+#define CMDSTATUS_ACTIVE_MASK (1 << 31)
+#define CMDSTATUS_ERR_MASK 0xFF
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index b343b7367..5eff76a1a 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -6,6 +6,7 @@ reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
 	'idxd_vdev.c',
+	'ioat_common.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
 deps += ['bus_pci',
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 0cee6b1b0..aca91dd4f 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -39,9 +39,20 @@ struct rte_ioat_generic_hw_desc {
 
 /**
  * @internal
- * Structure representing a device instance
+ * Identify the data path to use.
+ * Must be first field of rte_ioat_rawdev and rte_idxd_rawdev structs
+ */
+enum rte_ioat_dev_type {
+	RTE_IOAT_DEV,
+	RTE_IDXD_DEV,
+};
+
+/**
+ * @internal
+ * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	enum rte_ioat_dev_type type;
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
@@ -77,6 +88,28 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED		0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/**
+ * @internal
+ * Structure representing an IDXD device instance
+ */
+struct rte_idxd_rawdev {
+	enum rte_ioat_dev_type type;
+	void *portal; /* address to write the batch descriptor */
+
+	/* counters to track the batches and the individual op handles */
+	uint16_t batch_ring_sz;  /* size of batch ring */
+	uint16_t hdl_ring_sz;    /* size of the user hdl ring */
+
+	uint16_t next_batch;     /* where we write descriptor ops */
+	uint16_t next_completed; /* batch where we read completions */
+	uint16_t next_ret_hdl;   /* the next user hdl to return */
+	uint16_t last_completed_hdl; /* the last user hdl that has completed */
+	uint16_t next_free_hdl;  /* where the handle for next op will go */
+
+	struct rte_idxd_user_hdl *hdl_ring;
+	struct rte_idxd_desc_batch *batch_ring;
+};
+
 /**
  * Enqueue a copy operation onto the ioat device
  */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH 20.11 11/20] raw/ioat: create rawdev instances for idxd vdevs
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (9 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 10/20] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 12/20] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
                   ` (13 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

For each vdev (DSA work queue) instance, create a rawdev instance. Since
the vdev support depends on the accel-config library, make that support
conditional in meson.build on the library being found.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_vdev.c | 163 ++++++++++++++++++++++++++++++++++-
 drivers/raw/ioat/meson.build |   9 +-
 2 files changed, 169 insertions(+), 3 deletions(-)

diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 73fce6d87..e81bd7326 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -2,6 +2,12 @@
  * Copyright(c) 2020 Intel Corporation
  */
 
+#include <fcntl.h>
+#include <limits.h>
+#include <sys/mman.h>
+#include <accel-config/libaccel_config.h>
+
+#include <rte_memzone.h>
 #include <rte_bus_vdev.h>
 #include <rte_kvargs.h>
 #include <rte_string_fns.h>
@@ -26,6 +32,79 @@ struct idxd_vdev_args {
 	uint8_t wq_id;
 };
 
+static const struct rte_rawdev_ops idxd_vdev_ops = {
+		.dev_selftest = idxd_rawdev_test,
+};
+
+static void *
+idxd_vdev_mmap_wq(struct accfg_device *dsa_dev, struct accfg_wq *wq)
+{
+	void *addr;
+	int major, minor;
+	char path[PATH_MAX];
+	int fd;
+
+	major = accfg_device_get_cdev_major(dsa_dev);
+	if (major < 0) {
+		IOAT_PMD_ERR("Invalid major version %d", major);
+		return NULL;
+	}
+
+	minor = accfg_wq_get_cdev_minor(wq);
+	if (minor < 0) {
+		IOAT_PMD_ERR("Invalid minor version %d", minor);
+		return NULL;
+	}
+
+	snprintf(path, sizeof(path), "/dev/char/%u:%u", major, minor);
+	fd = open(path, O_RDWR);
+	if (fd < 0) {
+		IOAT_PMD_ERR("Failed to open device path");
+		return NULL;
+	}
+
+	addr = mmap(NULL, 0x1000, PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, 0);
+	if (addr == MAP_FAILED) {
+		IOAT_PMD_ERR("Failed to mmap device");
+		return NULL;
+	}
+
+	return addr;
+}
+
+static int
+idxd_rawdev_vdev_config(struct idxd_rawdev *idxd, struct idxd_vdev_args *args)
+{
+	struct accfg_ctx *dsa_ctx;
+	struct accfg_device *dsa_dev;
+	struct accfg_wq *dsa_wq;
+	int ret;
+
+	ret = accfg_new(&dsa_ctx);
+	if (ret < 0) {
+		IOAT_PMD_ERR("Failed to create device context");
+		return -ENOMEM;
+	}
+
+	dsa_dev = accfg_ctx_device_get_by_id(dsa_ctx, args->device_id);
+	if (dsa_dev == NULL) {
+		IOAT_PMD_ERR("device not found: %u", args->device_id);
+		return -1;
+	}
+	dsa_wq = accfg_device_wq_get_by_id(dsa_dev, args->wq_id);
+	if (dsa_wq == NULL) {
+		IOAT_PMD_ERR("queue not found: %u", args->wq_id);
+		return -1;
+	}
+
+	idxd->u.vdev.ctx = dsa_ctx;
+	idxd->u.vdev.device = dsa_dev;
+	idxd->u.vdev.wq = dsa_wq;
+	idxd->max_batches = accfg_wq_get_size(dsa_wq);
+
+	return ret;
+}
+
 static int
 idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
 			  void *extra_args)
@@ -76,6 +155,7 @@ static int
 idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 {
 	struct rte_kvargs *kvlist;
+	struct idxd_rawdev idxd = {0};
 	struct idxd_vdev_args vdev_args;
 	const char *name;
 	int ret = 0;
@@ -98,15 +178,52 @@ idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 		return -EINVAL;
 	}
 
+	ret = idxd_rawdev_vdev_config(&idxd, &vdev_args);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to init vdev context");
+		return ret;
+	}
+
+	idxd.qid = vdev_args.wq_id;
+	idxd.public.portal = idxd_vdev_mmap_wq(idxd.u.vdev.device, idxd.u.vdev.wq);
+	if (idxd.public.portal == NULL) {
+		IOAT_PMD_ERR("WQ mmap failed");
+		return -ENOMEM;
+	}
+
 	vdev->device.driver = &idxd_rawdev_drv_vdev.driver;
 
+	ret = idxd_rawdev_create(name, &vdev->device, &idxd, &idxd_vdev_ops);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to create rawdev %s", name);
+		return ret;
+	}
+
+	/* enable the device itself */
+	if (accfg_device_is_active(idxd.u.vdev.device)) {
+		IOAT_PMD_INFO("Device %s already enabled",
+				accfg_device_get_devname(idxd.u.vdev.device));
+	} else {
+		ret = accfg_device_enable(idxd.u.vdev.device);
+		if (ret) {
+			IOAT_PMD_ERR("Error enabling device %s",
+					accfg_device_get_devname(idxd.u.vdev.device));
+			return -1;
+		}
+		IOAT_PMD_DEBUG("Enabling device %s OK",
+				accfg_device_get_devname(idxd.u.vdev.device));
+	}
+
 	return 0;
 }
 
 static int
 idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 {
+	struct idxd_rawdev *idxd;
 	const char *name;
+	struct rte_rawdev *rdev;
+	int ret = 0;
 
 	name = rte_vdev_device_name(vdev);
 	if (name == NULL)
@@ -114,7 +231,51 @@ idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 
 	IOAT_PMD_INFO("Remove DSA vdev %s", name);
 
-	return 0;
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* disable the device */
+	if (!accfg_device_is_active(idxd->u.vdev.device)) {
+		IOAT_PMD_ERR("Device %s already disabled",
+				accfg_device_get_devname(idxd->u.vdev.device));
+	}
+
+	ret = accfg_device_disable(idxd->u.vdev.device);
+	if (ret) {
+		IOAT_PMD_ERR("Not able to disable device %s",
+				accfg_device_get_devname(idxd->u.vdev.device));
+		return ret;
+	}
+	IOAT_PMD_DEBUG("Disabling device %s OK",
+			accfg_device_get_devname(idxd->u.vdev.device));
+
+	/* free context and memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+		accfg_unref(idxd->u.vdev.ctx);
+
+		if (munmap(idxd->public.portal, 0x1000) < 0) {
+			IOAT_PMD_ERR("Error unmapping %s",
+				accfg_wq_get_devname(idxd->u.vdev.wq));
+			ret = -errno;
+		}
+
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+
+		rte_memzone_free(idxd->mz);
+	}
+
+	if (rte_rawdev_pmd_release(rdev))
+		IOAT_PMD_ERR("Device cleanup failed");
+
+	return ret;
 }
 
 struct rte_vdev_driver idxd_rawdev_drv_vdev = {
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 5eff76a1a..a730953f8 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -5,14 +5,19 @@ build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
-	'idxd_vdev.c',
 	'ioat_common.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
 deps += ['bus_pci',
-	'bus_vdev',
 	'mbuf',
 	'rawdev']
 
 install_headers('rte_ioat_rawdev.h',
 		'rte_ioat_rawdev_fns.h')
+
+accfg_dep = dependency('libaccel-config', required: false)
+if accfg_dep.found()
+	sources += files('idxd_vdev.c')
+	deps += ['bus_vdev']
+	ext_deps += accfg_dep
+endif
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH 20.11 12/20] raw/ioat: add datapath data structures for idxd devices
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (10 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 11/20] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 13/20] raw/ioat: add configure function " Bruce Richardson
                   ` (12 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Add the data structures used on the data path for DSA devices. Also
include a device dump function to output the status of each device.
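
Below is a minimal usage sketch of the dump hook via the generic rawdev
API; the helper name and the use of a particular device id are
illustrative assumptions, not part of this patch:

    #include <stdio.h>
    #include <rte_rawdev.h>

    /* print the idxd batch-ring and handle-ring state of an
     * already-probed rawdev to stdout */
    static void
    dump_dsa_state(uint16_t dev_id)
    {
            rte_rawdev_dump(dev_id, stdout);
    }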

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  3 +-
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 34 +++++++++++
 drivers/raw/ioat/ioat_private.h        |  2 +
 drivers/raw/ioat/ioat_rawdev_test.c    |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 80 ++++++++++++++++++++++++++
 6 files changed, 121 insertions(+), 2 deletions(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 11c07efaa..78c443703 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -52,7 +52,8 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 }
 
 static const struct rte_rawdev_ops idxd_pci_ops = {
-		.dev_selftest = idxd_rawdev_test
+		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index e81bd7326..2b8122cbc 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -34,6 +34,7 @@ struct idxd_vdev_args {
 
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 4397be886..3dda4ab8e 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -7,6 +7,36 @@
 
 #include "ioat_private.h"
 
+int
+idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	int i;
+
+	fprintf(f, "Raw Device #%d\n", dev->dev_id);
+	fprintf(f, "Driver: %s\n\n", dev->driver_name);
+
+	fprintf(f, "Portal: %p\n", rte_idxd->portal);
+	fprintf(f, "Batch Ring size: %u\n", rte_idxd->batch_ring_sz);
+	fprintf(f, "Comp Handle Ring size: %u\n\n", rte_idxd->hdl_ring_sz);
+
+	fprintf(f, "Next batch: %u\n", rte_idxd->next_batch);
+	fprintf(f, "Next batch to be completed: %u\n", rte_idxd->next_completed);
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		fprintf(f, "Batch %u @%p: submitted=%u, op_count=%u, hdl_end=%u\n",
+				i, b, b->submitted, b->op_count, b->hdl_end);
+	}
+
+	fprintf(f, "\n");
+	fprintf(f, "Next free hdl: %u\n", rte_idxd->next_free_hdl);
+	fprintf(f, "Last completed hdl: %u\n", rte_idxd->last_completed_hdl);
+	fprintf(f, "Next returned hdl: %u\n", rte_idxd->next_ret_hdl);
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
@@ -18,6 +48,10 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	int ret = 0;
 
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_hw_desc) != 64);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_idxd_hw_desc, size) != 32);
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_completion) != 32);
+
 	if (!name) {
 		IOAT_PMD_ERR("Invalid name of the device!");
 		ret = -EINVAL;
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 2ddaddc37..a873f3f2c 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -61,4 +61,6 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 extern int idxd_rawdev_test(uint16_t dev_id);
 
+extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index b208b8c19..7864138fb 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -255,7 +255,8 @@ ioat_rawdev_test(uint16_t dev_id)
 }
 
 int
-idxd_rawdev_test(uint16_t dev_id __rte_unused)
+idxd_rawdev_test(uint16_t dev_id)
 {
+	rte_rawdev_dump(dev_id, stdout);
 	return 0;
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index aca91dd4f..abd90514b 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -88,6 +88,86 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED		0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/*
+ * Defines used in the data path for interacting with hardware.
+ */
+#define IDXD_CMD_OP_SHIFT 24
+enum rte_idxd_ops {
+	idxd_op_nop = 0,
+	idxd_op_batch,
+	idxd_op_drain,
+	idxd_op_memmove,
+	idxd_op_fill
+};
+
+#define IDXD_FLAG_FENCE                 (1 << 0)
+#define IDXD_FLAG_COMPLETION_ADDR_VALID (1 << 2)
+#define IDXD_FLAG_REQUEST_COMPLETION    (1 << 3)
+#define IDXD_FLAG_CACHE_CONTROL         (1 << 8)
+
+/**
+ * Hardware descriptor used by DSA hardware, for both bursts and
+ * for individual operations.
+ */
+struct rte_idxd_hw_desc {
+	uint32_t pasid;
+	uint32_t op_flags;
+	rte_iova_t completion;
+
+	RTE_STD_C11
+	union {
+		rte_iova_t src;      /* source address for copy ops etc. */
+		rte_iova_t desc_addr; /* descriptor pointer for batch */
+	};
+	rte_iova_t dst;
+
+	uint32_t size;    /* length of data for op, or batch size */
+
+	/* 28 bytes of padding here */
+} __rte_aligned(64);
+
+/**
+ * Completion record structure written back by DSA
+ */
+struct rte_idxd_completion {
+	uint8_t status;
+	uint8_t result;
+	/* 16-bits pad here */
+	uint32_t completed_size; /* data length, or descriptors for batch */
+
+	rte_iova_t fault_address;
+	uint32_t invalid_flags;
+} __rte_aligned(32);
+
+#define BATCH_SIZE 64
+
+/**
+ * Structure used inside the driver for building up and submitting
+ * a batch of operations to the DSA hardware.
+ */
+struct rte_idxd_desc_batch {
+	struct rte_idxd_completion comp; /* the completion record for batch */
+
+	uint16_t submitted;
+	uint16_t op_count;
+	uint16_t hdl_end;
+
+	struct rte_idxd_hw_desc batch_desc;
+
+	/* batches must always have 2 descriptors, so put a null at the start */
+	struct rte_idxd_hw_desc null_desc;
+	struct rte_idxd_hw_desc ops[BATCH_SIZE];
+};
+
+/**
+ * structure used to save the "handles" provided by the user to be
+ * returned to the user on job completion.
+ */
+struct rte_idxd_user_hdl {
+	uint64_t src;
+	uint64_t dst;
+};
+
 /**
  * @internal
  * Structure representing an IDXD device instance
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH 20.11 13/20] raw/ioat: add configure function for idxd devices
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (11 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 12/20] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 14/20] raw/ioat: add start and stop functions " Bruce Richardson
                   ` (11 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Add a configure function for idxd devices, taking the same parameters as
the existing configure function for ioat. The ring_size parameter is used
to compute the maximum number of descriptor batches supported by the
driver, given that the hardware works on one batch of descriptors at a
time.
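
As a rough sketch of how an application drives this (the helper name and
device id are illustrative, and the third size argument assumes the
20.11 rawdev API where configure takes the private config length):

    #include <rte_rawdev.h>
    #include <rte_ioat_rawdev.h>

    static int
    configure_dsa(uint16_t dev_id)
    {
            /* 512 descriptors => 512/64 = 8 batches, subject to the
             * limit reported by the hardware work queue */
            struct rte_ioat_rawdev_config cfg = {
                    .ring_size = 512,
                    .hdls_disable = false,
            };
            struct rte_rawdev_info info = { .dev_private = &cfg };

            return rte_rawdev_configure(dev_id, &info, sizeof(cfg));
    }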

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 61 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h        |  3 ++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  1 +
 5 files changed, 67 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 78c443703..762efd5ac 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -54,6 +54,7 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 2b8122cbc..90ad11006 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -35,6 +35,7 @@ struct idxd_vdev_args {
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 3dda4ab8e..699661e27 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -37,6 +37,67 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	struct rte_ioat_rawdev_config *cfg = config;
+	uint16_t max_desc = cfg->ring_size;
+	uint16_t max_batches = max_desc / BATCH_SIZE;
+	uint16_t i;
+
+	if (config_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (dev->started) {
+		IOAT_PMD_ERR("%s: Error, device is started.", __func__);
+		return -EAGAIN;
+	}
+
+	rte_idxd->hdls_disable = cfg->hdls_disable;
+
+	/* limit the batches to what can be stored in hardware */
+	if (max_batches > idxd->max_batches) {
+		IOAT_PMD_DEBUG("Ring size of %u is too large for this device, need to limit to %u batches of %u",
+				max_desc, idxd->max_batches, BATCH_SIZE);
+		max_batches = idxd->max_batches;
+		max_desc = max_batches * BATCH_SIZE;
+	}
+	if (!rte_is_power_of_2(max_desc))
+		max_desc = rte_align32pow2(max_desc);
+	IOAT_PMD_DEBUG("Rawdev %u using %u descriptors in %u batches",
+			dev->dev_id, max_desc, max_batches);
+
+	/* TODO clean up if reconfiguring an existing device */
+	rte_idxd->batch_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->batch_ring) * max_batches, 0);
+	if (rte_idxd->batch_ring == NULL)
+		return -ENOMEM;
+
+	rte_idxd->hdl_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->hdl_ring) * max_desc, 0);
+	if (rte_idxd->hdl_ring == NULL) {
+		rte_free(rte_idxd->batch_ring);
+		rte_idxd->batch_ring = NULL;
+		return -ENOMEM;
+	}
+	rte_idxd->batch_ring_sz = max_batches;
+	rte_idxd->hdl_ring_sz = max_desc;
+
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		b->batch_desc.completion = rte_mem_virt2iova(&b->comp);
+		b->batch_desc.desc_addr = rte_mem_virt2iova(&b->null_desc);
+		b->batch_desc.op_flags = (idxd_op_batch << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_COMPLETION_ADDR_VALID |
+				IDXD_FLAG_REQUEST_COMPLETION;
+	}
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index a873f3f2c..91ded444b 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -59,6 +59,9 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
 
+extern int idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index abd90514b..7090ac0a1 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -185,6 +185,7 @@ struct rte_idxd_rawdev {
 	uint16_t next_ret_hdl;   /* the next user hdl to return */
 	uint16_t last_completed_hdl; /* the last user hdl that has completed */
 	uint16_t next_free_hdl;  /* where the handle for next op will go */
+	uint16_t hdls_disable;   /* disable tracking completion handles */
 
 	struct rte_idxd_user_hdl *hdl_ring;
 	struct rte_idxd_desc_batch *batch_ring;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH 20.11 14/20] raw/ioat: add start and stop functions for idxd devices
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (12 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 13/20] raw/ioat: add configure function " Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 15/20] raw/ioat: add data path support " Bruce Richardson
                   ` (10 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Add the start and stop functions for DSA hardware devices.
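
For illustration, a sketch of the application-level calls that end up in
these hooks (the helper name, device id and error handling are
assumptions for the example):

    #include <rte_debug.h>
    #include <rte_rawdev.h>

    static void
    start_then_stop_dsa(uint16_t dev_id)
    {
            /* enable the work queue once it has been configured ... */
            if (rte_rawdev_start(dev_id) != 0)
                    rte_panic("cannot start DSA work queue\n");

            /* ... and disable it again when tearing down */
            rte_rawdev_stop(dev_id);
    }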

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c  | 52 ++++++++++++++++++++++++++++++++++++
 drivers/raw/ioat/idxd_vdev.c | 50 ++++++++++++++++++++++++++++++++++
 2 files changed, 102 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 762efd5ac..6655cf9b7 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -51,10 +51,62 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 	return (state & WQ_STATE_MASK) == 0x1;
 }
 
+static void
+idxd_pci_dev_stop(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (!idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Work queue %d already disabled", idxd->qid);
+		return;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_wq);
+	if (err_code || idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed disabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return;
+	}
+	IOAT_PMD_DEBUG("Work queue %d disabled OK", idxd->qid);
+
+	return;
+}
+
+static int
+idxd_pci_dev_start(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_WARN("WQ %d already enabled", idxd->qid);
+		return 0;
+	}
+
+	if (idxd->public.batch_ring == NULL) {
+		IOAT_PMD_ERR("WQ %d has not been fully configured", idxd->qid);
+		return -EINVAL;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_wq);
+	if (err_code || !idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed enabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return err_code == 0 ? -1 : err_code;
+	}
+
+	IOAT_PMD_DEBUG("Work queue %d enabled OK", idxd->qid);
+
+	return 0;
+}
+
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_start = idxd_pci_dev_start,
+		.dev_stop = idxd_pci_dev_stop,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 90ad11006..ab7efd216 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -32,10 +32,60 @@ struct idxd_vdev_args {
 	uint8_t wq_id;
 };
 
+static void
+idxd_vdev_stop(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	int ret;
+
+	if (!accfg_wq_is_enabled(idxd->u.vdev.wq)) {
+		IOAT_PMD_ERR("Work queue %s already disabled",
+				accfg_wq_get_devname(idxd->u.vdev.wq));
+		return;
+	}
+
+	ret = accfg_wq_disable(idxd->u.vdev.wq);
+	if (ret) {
+		IOAT_PMD_INFO("Work queue %s not disabled, continuing...",
+				accfg_wq_get_devname(idxd->u.vdev.wq));
+		return;
+	}
+	IOAT_PMD_DEBUG("Disabling work queue %s OK",
+			accfg_wq_get_devname(idxd->u.vdev.wq));
+
+	return;
+}
+
+static int
+idxd_vdev_start(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	int ret;
+
+	if (accfg_wq_is_enabled(idxd->u.vdev.wq)) {
+		IOAT_PMD_ERR("Work queue %s already enabled",
+				accfg_wq_get_devname(idxd->u.vdev.wq));
+		return 0;
+	}
+
+	ret = accfg_wq_enable(idxd->u.vdev.wq);
+	if (ret) {
+		IOAT_PMD_ERR("Error enabling work queue %s",
+				accfg_wq_get_devname(idxd->u.vdev.wq));
+		return -1;
+	}
+	IOAT_PMD_DEBUG("Enabling work queue %s OK",
+			accfg_wq_get_devname(idxd->u.vdev.wq));
+
+	return 0;
+}
+
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_start = idxd_vdev_start,
+		.dev_stop = idxd_vdev_stop,
 };
 
 static void *
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH 20.11 15/20] raw/ioat: add data path support for idxd devices
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (13 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 14/20] raw/ioat: add start and stop functions " Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 16/20] raw/ioat: add info function " Bruce Richardson
                   ` (9 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Add support for doing copies using DSA hardware. This is implemented by
switching on the device type field at the start of the inline functions.
Since no hardware platform will have both device types present, this
branch is always predictable after the first call, meaning it has little
to no performance penalty.
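
A rough sketch of the resulting data path, which is identical for ioat
and idxd devices (the helper name, mbuf setup and device id are
illustrative assumptions):

    #include <rte_mbuf.h>
    #include <rte_ioat_rawdev.h>

    /* copy "len" bytes from src to dst mbuf data on a started rawdev,
     * polling until the operation completes */
    static int
    do_one_copy(int dev_id, struct rte_mbuf *src, struct rte_mbuf *dst,
                    unsigned int len)
    {
            uintptr_t src_hdl, dst_hdl;
            int n;

            if (rte_ioat_enqueue_copy(dev_id,
                            rte_pktmbuf_iova(src), rte_pktmbuf_iova(dst),
                            len, (uintptr_t)src, (uintptr_t)dst, 0) != 1)
                    return -1;

            rte_ioat_do_copies(dev_id); /* submit to hardware */

            do {
                    n = rte_ioat_completed_copies(dev_id, 1,
                                    &src_hdl, &dst_hdl);
            } while (n == 0);

            return n < 0 ? -1 : 0;
    }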

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/ioat_common.c         |   1 +
 drivers/raw/ioat/ioat_rawdev.c         |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 165 +++++++++++++++++++++++--
 3 files changed, 158 insertions(+), 9 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 699661e27..33e5bb4a6 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -143,6 +143,7 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 	idxd = rawdev->dev_private;
 	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->public.type = RTE_IDXD_DEV;
 	idxd->rawdev = rawdev;
 	idxd->mz = mz;
 
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 8f9c8b56f..48fe32d0a 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -252,6 +252,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	rawdev->driver_name = dev->device.driver->name;
 
 	ioat = rawdev->dev_private;
+	ioat->type = RTE_IOAT_DEV;
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 7090ac0a1..98af40894 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -194,8 +194,8 @@ struct rte_idxd_rawdev {
 /**
  * Enqueue a copy operation onto the ioat device
  */
-static inline int
-rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+static __rte_always_inline int
+__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
 		int fence)
 {
@@ -233,8 +233,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 /**
  * Trigger hardware to begin performing enqueued copy operations
  */
-static inline void
-rte_ioat_do_copies(int dev_id)
+static __rte_always_inline void
+__ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
@@ -248,8 +248,8 @@ rte_ioat_do_copies(int dev_id)
  * @internal
  * Returns the index of the last completed operation.
  */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+static __rte_always_inline int
+__ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 {
 	uint64_t status = ioat->status;
 
@@ -263,8 +263,8 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /**
  * Returns details of copy operations that have been completed
  */
-static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+static __rte_always_inline int
+__ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
@@ -274,7 +274,7 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	int error;
 	int i = 0;
 
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	end_read = (__ioat_get_last_completed(ioat, &error) + 1) & mask;
 	count = (end_read - (read & mask)) & mask;
 
 	if (error) {
@@ -311,4 +311,151 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static __rte_always_inline int
+__idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
+		int fence __rte_unused)
+{
+	struct rte_idxd_rawdev *idxd = rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+	uint32_t op_flags = (idxd_op_memmove << IDXD_CMD_OP_SHIFT) |
+			IDXD_FLAG_CACHE_CONTROL;
+
+	/* check for room in the handle ring */
+	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl) {
+		rte_errno = ENOSPC;
+		return 0;
+	}
+	if (b->op_count >= BATCH_SIZE) {
+		/* TODO change to submit batch and move on */
+		rte_errno = ENOSPC;
+		return 0;
+	}
+	/* check that we can actually use the current batch */
+	if (b->submitted) {
+		rte_errno = ENOSPC;
+		return 0;
+	}
+
+	/* write the descriptor */
+	b->ops[b->op_count++] = (struct rte_idxd_hw_desc){
+		.op_flags =  op_flags,
+		.src = src,
+		.dst = dst,
+		.size = length
+	};
+
+	/* store the completion details */
+	if (!idxd->hdls_disable)
+		idxd->hdl_ring[idxd->next_free_hdl] = (struct rte_idxd_user_hdl) {
+			.src = src_hdl,
+			.dst = dst_hdl
+		};
+	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
+		idxd->next_free_hdl = 0;
+
+	return 1;
+}
+
+static __rte_always_inline void
+__idxd_movdir64b(volatile void *dst, const void *src)
+{
+	asm volatile (".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
+			:
+			: "a" (dst), "d" (src));
+}
+
+static __rte_always_inline void
+__idxd_perform_ops(int dev_id)
+{
+	struct rte_idxd_rawdev *idxd = rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+
+	if (b->submitted || b->op_count == 0)
+		return;
+	b->hdl_end = idxd->next_free_hdl;
+	b->comp.status = 0;
+	b->submitted = 1;
+	b->batch_desc.size = b->op_count + 1;
+	__idxd_movdir64b(idxd->portal, &b->batch_desc);
+
+	if (++idxd->next_batch == idxd->batch_ring_sz)
+		idxd->next_batch = 0;
+}
+
+static __rte_always_inline int
+__idxd_completed_ops(int dev_id, uint8_t max_ops,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_idxd_rawdev *idxd = rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_completed];
+	uint16_t h_idx = idxd->next_ret_hdl;
+	int n = 0;
+
+	while (b->submitted && b->comp.status != 0) {
+		idxd->last_completed_hdl = b->hdl_end;
+		b->submitted = 0;
+		b->op_count = 0;
+		if (++idxd->next_completed == idxd->batch_ring_sz)
+			idxd->next_completed = 0;
+		b = &idxd->batch_ring[idxd->next_completed];
+	}
+
+	if (!idxd->hdls_disable)
+		for (n = 0; n < max_ops && h_idx != idxd->last_completed_hdl; n++) {
+			src_hdls[n] = idxd->hdl_ring[h_idx].src;
+			dst_hdls[n] = idxd->hdl_ring[h_idx].dst;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+	else
+		while (h_idx != idxd->last_completed_hdl) {
+			n++;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+
+	idxd->next_ret_hdl = h_idx;
+
+	return n;
+}
+
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
+		int fence)
+{
+	enum rte_ioat_dev_type *type = rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl, fence);
+	else
+		return __ioat_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl, fence);
+}
+
+static inline void
+rte_ioat_do_copies(int dev_id)
+{
+	enum rte_ioat_dev_type *type = rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_perform_ops(dev_id);
+	else
+		return __ioat_perform_ops(dev_id);
+}
+
+static inline int
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	enum rte_ioat_dev_type *type = rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_completed_ops(dev_id, max_copies,
+				src_hdls, dst_hdls);
+	else
+		return __ioat_completed_ops(dev_id, max_copies,
+				src_hdls, dst_hdls);
+}
+
+
 #endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH 20.11 16/20] raw/ioat: add info function for idxd devices
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (14 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 15/20] raw/ioat: add data path support " Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 17/20] raw/ioat: create separate statistics structure Bruce Richardson
                   ` (8 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Add the info get function for DSA devices, returning just the ring size
of the device, the same as is returned for the existing IOAT/CBDMA
devices.
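
A sketch of retrieving that value from an application (the helper name
and device id are illustrative; the size argument assumes the 20.11
rawdev API where info_get takes the private struct length):

    #include <rte_rawdev.h>
    #include <rte_ioat_rawdev.h>

    static unsigned short
    get_dsa_ring_size(uint16_t dev_id)
    {
            struct rte_ioat_rawdev_config cfg;
            struct rte_rawdev_info info = { .dev_private = &cfg };

            if (rte_rawdev_info_get(dev_id, &info, sizeof(cfg)) != 0)
                    return 0;
            return cfg.ring_size;
    }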

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c     |  1 +
 drivers/raw/ioat/idxd_vdev.c    |  1 +
 drivers/raw/ioat/ioat_common.c  | 16 ++++++++++++++++
 drivers/raw/ioat/ioat_private.h |  3 +++
 4 files changed, 21 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 6655cf9b7..2b5a0e1c8 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -107,6 +107,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index ab7efd216..8b7c2cacd 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -86,6 +86,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_vdev_start,
 		.dev_stop = idxd_vdev_stop,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 33e5bb4a6..fe293aebf 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -37,6 +37,22 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size)
+{
+	struct rte_ioat_rawdev_config *cfg = dev_info;
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+
+	if (info_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (cfg != NULL)
+		cfg->ring_size = rte_idxd->hdl_ring_sz;
+	return 0;
+}
+
 int
 idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size)
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 91ded444b..35189b15f 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -62,6 +62,9 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 extern int idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size);
 
+extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH 20.11 17/20] raw/ioat: create separate statistics structure
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (15 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 16/20] raw/ioat: add info function " Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 18/20] raw/ioat: move xstats functions to common file Bruce Richardson
                   ` (7 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Rather than having the xstats as fields inside the main driver structure,
create a separate structure type for them.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c         | 21 ++++++++----------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 30 ++++++++++++++++----------
 2 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 48fe32d0a..e4d39a2ee 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -135,10 +135,10 @@ ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
 
 	for (i = 0; i < n; i++) {
 		switch (ids[i]) {
-		case 0: values[i] = ioat->enqueue_failed; break;
-		case 1: values[i] = ioat->enqueued; break;
-		case 2: values[i] = ioat->started; break;
-		case 3: values[i] = ioat->completed; break;
+		case 0: values[i] = ioat->xstats.enqueue_failed; break;
+		case 1: values[i] = ioat->xstats.enqueued; break;
+		case 2: values[i] = ioat->xstats.started; break;
+		case 3: values[i] = ioat->xstats.completed; break;
 		default: values[i] = 0; break;
 		}
 	}
@@ -169,26 +169,23 @@ ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
 	unsigned int i;
 
 	if (!ids) {
-		ioat->enqueue_failed = 0;
-		ioat->enqueued = 0;
-		ioat->started = 0;
-		ioat->completed = 0;
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
 		return 0;
 	}
 
 	for (i = 0; i < nb_ids; i++) {
 		switch (ids[i]) {
 		case 0:
-			ioat->enqueue_failed = 0;
+			ioat->xstats.enqueue_failed = 0;
 			break;
 		case 1:
-			ioat->enqueued = 0;
+			ioat->xstats.enqueued = 0;
 			break;
 		case 2:
-			ioat->started = 0;
+			ioat->xstats.started = 0;
 			break;
 		case 3:
-			ioat->completed = 0;
+			ioat->xstats.completed = 0;
 			break;
 		default:
 			IOAT_PMD_WARN("Invalid xstat id - cannot reset value");
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 98af40894..813c1a157 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -47,17 +47,31 @@ enum rte_ioat_dev_type {
 	RTE_IDXD_DEV,
 };
 
+/**
+ * @internal
+ * some statistics for tracking, if added/changed update xstats fns
+ */
+struct rte_ioat_xstats {
+	uint64_t enqueue_failed;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+};
+
 /**
  * @internal
  * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	/* common fields at the top - match those in rte_idxd_rawdev */
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile uint16_t *doorbell;
+	volatile uint16_t *doorbell __rte_cache_aligned;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -70,12 +84,6 @@ struct rte_ioat_rawdev {
 	unsigned short next_read;
 	unsigned short next_write;
 
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
 
@@ -207,7 +215,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	struct rte_ioat_generic_hw_desc *desc;
 
 	if (space == 0) {
-		ioat->enqueue_failed++;
+		ioat->xstats.enqueue_failed++;
 		return 0;
 	}
 
@@ -226,7 +234,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 					(int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
-	ioat->enqueued++;
+	ioat->xstats.enqueued++;
 	return 1;
 }
 
@@ -241,7 +249,7 @@ __ioat_perform_ops(int dev_id)
 			.control.completion_update = 1;
 	rte_compiler_barrier();
 	*ioat->doorbell = ioat->next_write;
-	ioat->started = ioat->enqueued;
+	ioat->xstats.started = ioat->xstats.enqueued;
 }
 
 /**
@@ -307,7 +315,7 @@ __ioat_completed_ops(int dev_id, uint8_t max_copies,
 
 end:
 	ioat->next_read = read;
-	ioat->completed += count;
+	ioat->xstats.completed += count;
 	return count;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH 20.11 18/20] raw/ioat: move xstats functions to common file
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (16 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 17/20] raw/ioat: create separate statistics structure Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 19/20] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
                   ` (6 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

The xstats functions can be used by all ioat devices, so move them from
ioat_rawdev.c to ioat_common.c and add the function prototypes to the
internal header file.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/ioat_common.c  | 76 +++++++++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 10 +++++
 drivers/raw/ioat/ioat_rawdev.c  | 75 --------------------------------
 3 files changed, 86 insertions(+), 75 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index fe293aebf..5f366f009 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -4,9 +4,85 @@
 
 #include <rte_rawdev_pmd.h>
 #include <rte_memzone.h>
+#include <rte_string_fns.h>
 
 #include "ioat_private.h"
 
+static const char * const xstat_names[] = {
+		"failed_enqueues", "successful_enqueues",
+		"copies_started", "copies_completed"
+};
+
+int
+ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n)
+{
+	const struct rte_ioat_rawdev *ioat = dev->dev_private;
+	unsigned int i;
+
+	for (i = 0; i < n; i++) {
+		switch (ids[i]) {
+		case 0: values[i] = ioat->xstats.enqueue_failed; break;
+		case 1: values[i] = ioat->xstats.enqueued; break;
+		case 2: values[i] = ioat->xstats.started; break;
+		case 3: values[i] = ioat->xstats.completed; break;
+		default: values[i] = 0; break;
+		}
+	}
+	return n;
+}
+
+int
+ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size)
+{
+	unsigned int i;
+
+	RTE_SET_USED(dev);
+	if (size < RTE_DIM(xstat_names))
+		return RTE_DIM(xstat_names);
+
+	for (i = 0; i < RTE_DIM(xstat_names); i++)
+		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
+
+	return RTE_DIM(xstat_names);
+}
+
+int
+ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
+{
+	struct rte_ioat_rawdev *ioat = dev->dev_private;
+	unsigned int i;
+
+	if (!ids) {
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
+		return 0;
+	}
+
+	for (i = 0; i < nb_ids; i++) {
+		switch (ids[i]) {
+		case 0:
+			ioat->xstats.enqueue_failed = 0;
+			break;
+		case 1:
+			ioat->xstats.enqueued = 0;
+			break;
+		case 2:
+			ioat->xstats.started = 0;
+			break;
+		case 3:
+			ioat->xstats.completed = 0;
+			break;
+		default:
+			IOAT_PMD_WARN("Invalid xstat id - cannot reset value");
+			break;
+		}
+	}
+
+	return 0;
+}
+
 int
 idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 {
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 35189b15f..46465e675 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -55,6 +55,16 @@ struct idxd_rawdev {
 	} u;
 };
 
+int ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n);
+
+int ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size);
+
+int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
+		uint32_t nb_ids);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index e4d39a2ee..e3c98a825 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -121,81 +121,6 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 	return 0;
 }
 
-static const char * const xstat_names[] = {
-		"failed_enqueues", "successful_enqueues",
-		"copies_started", "copies_completed"
-};
-
-static int
-ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
-		uint64_t values[], unsigned int n)
-{
-	const struct rte_ioat_rawdev *ioat = dev->dev_private;
-	unsigned int i;
-
-	for (i = 0; i < n; i++) {
-		switch (ids[i]) {
-		case 0: values[i] = ioat->xstats.enqueue_failed; break;
-		case 1: values[i] = ioat->xstats.enqueued; break;
-		case 2: values[i] = ioat->xstats.started; break;
-		case 3: values[i] = ioat->xstats.completed; break;
-		default: values[i] = 0; break;
-		}
-	}
-	return n;
-}
-
-static int
-ioat_xstats_get_names(const struct rte_rawdev *dev,
-		struct rte_rawdev_xstats_name *names,
-		unsigned int size)
-{
-	unsigned int i;
-
-	RTE_SET_USED(dev);
-	if (size < RTE_DIM(xstat_names))
-		return RTE_DIM(xstat_names);
-
-	for (i = 0; i < RTE_DIM(xstat_names); i++)
-		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
-
-	return RTE_DIM(xstat_names);
-}
-
-static int
-ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
-{
-	struct rte_ioat_rawdev *ioat = dev->dev_private;
-	unsigned int i;
-
-	if (!ids) {
-		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
-		return 0;
-	}
-
-	for (i = 0; i < nb_ids; i++) {
-		switch (ids[i]) {
-		case 0:
-			ioat->xstats.enqueue_failed = 0;
-			break;
-		case 1:
-			ioat->xstats.enqueued = 0;
-			break;
-		case 2:
-			ioat->xstats.started = 0;
-			break;
-		case 3:
-			ioat->xstats.completed = 0;
-			break;
-		default:
-			IOAT_PMD_WARN("Invalid xstat id - cannot reset value");
-			break;
-		}
-	}
-
-	return 0;
-}
-
 extern int ioat_rawdev_test(uint16_t dev_id);
 
 static int
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH 20.11 19/20] raw/ioat: add xstats tracking for idxd devices
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (17 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 18/20] raw/ioat: move xstats functions to common file Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 20/20] raw/ioat: clean up use of common test function Bruce Richardson
                   ` (5 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Update the relevant stats in the data path functions and point the
device struct's xstats function pointers at the existing ioat functions.

At this point, all the hooks needed to support the existing unit tests
are in place, so run those tests for each device.
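
A sketch of reading the counters back through the generic xstats API
(the helper name and device id are illustrative; ids 0-3 correspond to
the four counters in the order the driver reports their names):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_rawdev.h>

    static void
    print_dsa_xstats(uint16_t dev_id)
    {
            struct rte_rawdev_xstats_name names[4];
            unsigned int ids[4] = {0, 1, 2, 3};
            uint64_t vals[4];
            int i, n;

            n = rte_rawdev_xstats_names_get(dev_id, names, 4);
            if (n <= 0)
                    return;
            if (n > 4)
                    n = 4;
            if (rte_rawdev_xstats_get(dev_id, ids, vals, n) != n)
                    return;
            for (i = 0; i < n; i++)
                    printf("%s: %" PRIu64 "\n", names[i].name, vals[i]);
    }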

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  3 +++
 drivers/raw/ioat/idxd_vdev.c           |  3 +++
 drivers/raw/ioat/ioat_rawdev_test.c    |  2 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 26 ++++++++++++++++----------
 4 files changed, 23 insertions(+), 11 deletions(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 2b5a0e1c8..a426897b2 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -108,6 +108,9 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 8b7c2cacd..aa2693368 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -87,6 +87,9 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_start = idxd_vdev_start,
 		.dev_stop = idxd_vdev_stop,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 7864138fb..678d135c4 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -258,5 +258,5 @@ int
 idxd_rawdev_test(uint16_t dev_id)
 {
 	rte_rawdev_dump(dev_id, stdout);
-	return 0;
+	return ioat_rawdev_test(dev_id);
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 813c1a157..2ddd0a024 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -182,6 +182,8 @@ struct rte_idxd_user_hdl {
  */
 struct rte_idxd_rawdev {
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	void *portal; /* address to write the batch descriptor */
 
 	/* counters to track the batches and the individual op handles */
@@ -330,20 +332,16 @@ __idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
 			IDXD_FLAG_CACHE_CONTROL;
 
 	/* check for room in the handle ring */
-	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl) {
-		rte_errno = ENOSPC;
-		return 0;
-	}
+	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl)
+		goto failed;
+
 	if (b->op_count >= BATCH_SIZE) {
 		/* TODO change to submit batch and move on */
-		rte_errno = ENOSPC;
-		return 0;
+		goto failed;
 	}
 	/* check that we can actually use the current batch */
-	if (b->submitted) {
-		rte_errno = ENOSPC;
-		return 0;
-	}
+	if (b->submitted)
+		goto failed;
 
 	/* write the descriptor */
 	b->ops[b->op_count++] = (struct rte_idxd_hw_desc){
@@ -362,7 +360,13 @@ __idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
 	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
 		idxd->next_free_hdl = 0;
 
+	idxd->xstats.enqueued++;
 	return 1;
+
+failed:
+	idxd->xstats.enqueue_failed++;
+	rte_errno = ENOSPC;
+	return 0;
 }
 
 static __rte_always_inline void
@@ -389,6 +393,7 @@ __idxd_perform_ops(int dev_id)
 
 	if (++idxd->next_batch == idxd->batch_ring_sz)
 		idxd->next_batch = 0;
+	idxd->xstats.started = idxd->xstats.enqueued;
 }
 
 static __rte_always_inline int
@@ -425,6 +430,7 @@ __idxd_completed_ops(int dev_id, uint8_t max_ops,
 
 	idxd->next_ret_hdl = h_idx;
 
+	idxd->xstats.completed += n;
 	return n;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH 20.11 20/20] raw/ioat: clean up use of common test function
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (18 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 19/20] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
@ 2020-07-21  9:51 ` Bruce Richardson
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (4 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-07-21  9:51 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, kevin.laatz, Bruce Richardson

Now that all devices can pass the same set of unit tests, eliminate the
temporary idxd_rawdev_test function and move the prototype for
ioat_rawdev_test to the proper internal header file, to be used by all
device instances.
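
As an aside (an illustrative sketch, not part of the patch), applications and
the rawdev autotest reach this common test through the generic selftest API
rather than by calling ioat_rawdev_test() directly; the wrapper name below is
arbitrary.

    #include <stdio.h>
    #include <rte_rawdev.h>

    /* Invoke the driver's dev_selftest hook (now ioat_rawdev_test for all
     * ioat/idxd instances) on an already-probed rawdev. */
    static int
    run_selftest(uint16_t dev_id)
    {
            int ret = rte_rawdev_selftest(dev_id);

            if (ret != 0)
                    printf("rawdev %u selftest failed: %d\n", dev_id, ret);
            return ret;
    }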

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c         | 2 +-
 drivers/raw/ioat/idxd_vdev.c        | 2 +-
 drivers/raw/ioat/ioat_private.h     | 4 ++--
 drivers/raw/ioat/ioat_rawdev.c      | 2 --
 drivers/raw/ioat/ioat_rawdev_test.c | 7 -------
 5 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index a426897b2..d5e87d2e4 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -102,7 +102,7 @@ idxd_pci_dev_start(struct rte_rawdev *dev)
 }
 
 static const struct rte_rawdev_ops idxd_pci_ops = {
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index aa2693368..7c3c791ce 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -81,7 +81,7 @@ idxd_vdev_start(struct rte_rawdev *dev)
 }
 
 static const struct rte_rawdev_ops idxd_vdev_ops = {
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_vdev_start,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 46465e675..0c626e6bf 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -65,6 +65,8 @@ int ioat_xstats_get_names(const struct rte_rawdev *dev,
 int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
 		uint32_t nb_ids);
 
+extern int ioat_rawdev_test(uint16_t dev_id);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
@@ -75,8 +77,6 @@ extern int idxd_dev_configure(const struct rte_rawdev *dev,
 extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 		size_t info_size);
 
-extern int idxd_rawdev_test(uint16_t dev_id);
-
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
 
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index e3c98a825..1b6a58073 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -121,8 +121,6 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 	return 0;
 }
 
-extern int ioat_rawdev_test(uint16_t dev_id);
-
 static int
 ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 678d135c4..626299525 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -253,10 +253,3 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
-
-int
-idxd_rawdev_test(uint16_t dev_id)
-{
-	rte_rawdev_dump(dev_id, stdout);
-	return ioat_rawdev_test(dev_id);
-}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (19 preceding siblings ...)
  2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 20/20] raw/ioat: clean up use of common test function Bruce Richardson
@ 2020-08-21 16:29 ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 01/18] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
                     ` (18 more replies)
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                   ` (3 subsequent siblings)
  24 siblings, 19 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

This patchset adds some small enhancements, some rework and also support
for new hardware to the ioat rawdev driver. Most rework and enhancements
are largely self-explanatory from the individual patches.

The new hardware support is for the Intel(R) DSA accelerator which will be
present in future Intel processors. A description of this new hardware is
covered in [1]. Functions specific to the new hardware use the "idxd"
prefix, for consistency with the kernel driver.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

---
V2:
 * Included documentation additions in the set
 * Split off the rawdev unit test changes to a separate patchset for easier
   review
 * General code improvements and cleanups


Bruce Richardson (14):
  raw/ioat: split header for readability
  raw/ioat: make the HW register spec private
  raw/ioat: add skeleton for VFIO/UIO based DSA device
  raw/ioat: include example configuration script
  raw/ioat: create rawdev instances on idxd PCI probe
  raw/ioat: add datapath data structures for idxd devices
  raw/ioat: add configure function for idxd devices
  raw/ioat: add start and stop functions for idxd devices
  raw/ioat: add data path for idxd devices
  raw/ioat: add info function for idxd devices
  raw/ioat: create separate statistics structure
  raw/ioat: move xstats functions to common file
  raw/ioat: add xstats tracking for idxd devices
  raw/ioat: clean up use of common test function

Cheng Jiang (1):
  raw/ioat: add a flag to control copying handle parameters

Kevin Laatz (3):
  usertools/dpdk-devbind.py: add support for DSA HW
  raw/ioat: add vdev probe for DSA/idxd devices
  raw/ioat: create rawdev instances for idxd vdevs

 doc/guides/rawdevs/ioat.rst                   | 131 +++--
 drivers/raw/ioat/dpdk_idxd_cfg.py             |  79 +++
 drivers/raw/ioat/idxd_pci.c                   | 344 +++++++++++++
 drivers/raw/ioat/idxd_vdev.c                  | 234 +++++++++
 drivers/raw/ioat/ioat_common.c                | 237 +++++++++
 drivers/raw/ioat/ioat_private.h               |  80 +++
 drivers/raw/ioat/ioat_rawdev.c                |  95 +---
 drivers/raw/ioat/ioat_rawdev_test.c           |   1 +
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} |  90 +++-
 drivers/raw/ioat/meson.build                  |  15 +-
 drivers/raw/ioat/rte_ioat_rawdev.h            | 169 ++-----
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 474 ++++++++++++++++++
 usertools/dpdk-devbind.py                     |   4 +-
 13 files changed, 1646 insertions(+), 307 deletions(-)
 create mode 100755 drivers/raw/ioat/dpdk_idxd_cfg.py
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/idxd_vdev.c
 create mode 100644 drivers/raw/ioat/ioat_common.c
 create mode 100644 drivers/raw/ioat/ioat_private.h
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (74%)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 01/18] raw/ioat: add a flag to control copying handle parameters
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 02/18] raw/ioat: split header for readability Bruce Richardson
                     ` (17 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev
  Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Cheng Jiang,
	Bruce Richardson

From: Cheng Jiang <Cheng1.jiang@intel.com>

Add a flag which controls whether the rte_ioat_enqueue_copy and
rte_ioat_completed_copies functions should process handle parameters. Not
doing so can improve performance when handle parameters are not
necessary.
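
For illustration only (not part of the patch), the new flag is set through the
existing configuration struct roughly as sketched below. The helper name is
hypothetical, and note that rte_rawdev_configure() may take an additional
config-size argument depending on the DPDK version.

    #include <stdbool.h>
    #include <rte_rawdev.h>
    #include <rte_ioat_rawdev.h>

    /* Configure an ioat rawdev with handle tracking disabled.
     * The two-argument rte_rawdev_configure() form is assumed here;
     * later DPDK releases add a third size argument. */
    static int
    setup_ioat_no_hdls(uint16_t dev_id)
    {
            struct rte_ioat_rawdev_config cfg = {
                    .ring_size = 512,     /* descriptor ring size */
                    .hdls_disable = true, /* skip storing/returning handles */
            };

            return rte_rawdev_configure(dev_id, &cfg);
    }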

Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c     |  2 ++
 drivers/raw/ioat/rte_ioat_rawdev.h | 45 ++++++++++++++++++++++--------
 2 files changed, 36 insertions(+), 11 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 7f1a154360..856b55cb6e 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -58,6 +58,7 @@ ioat_dev_configure(const struct rte_rawdev *dev, rte_rawdev_obj_t config,
 		return -EINVAL;
 
 	ioat->ring_size = params->ring_size;
+	ioat->hdls_disable = params->hdls_disable;
 	if (ioat->desc_ring != NULL) {
 		rte_memzone_free(ioat->desc_mz);
 		ioat->desc_ring = NULL;
@@ -122,6 +123,7 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 		return -EINVAL;
 
 	cfg->ring_size = ioat->ring_size;
+	cfg->hdls_disable = ioat->hdls_disable;
 	return 0;
 }
 
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index f765a65571..4bc6491d91 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -34,7 +34,8 @@
  * an ioat rawdev instance.
  */
 struct rte_ioat_rawdev_config {
-	unsigned short ring_size;
+	unsigned short ring_size; /**< size of job submission descriptor ring */
+	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
 /**
@@ -52,6 +53,7 @@ struct rte_ioat_rawdev {
 
 	unsigned short ring_size;
 	struct rte_ioat_generic_hw_desc *desc_ring;
+	bool hdls_disable;
 	__m128i *hdls; /* completion handles for returning to user */
 
 
@@ -84,10 +86,14 @@ struct rte_ioat_rawdev {
  *   The length of the data to be copied
  * @param src_hdl
  *   An opaque handle for the source data, to be returned when this operation
- *   has been completed and the user polls for the completion details
+ *   has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdl
  *   An opaque handle for the destination data, to be returned when this
- *   operation has been completed and the user polls for the completion details
+ *   operation has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param fence
  *   A flag parameter indicating that hardware should not begin to perform any
  *   subsequently enqueued copy operations until after this operation has
@@ -121,8 +127,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
 	desc->src_addr = src;
 	desc->dest_addr = dst;
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
 
-	ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl, (int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
 	ioat->enqueued++;
@@ -168,19 +176,29 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /**
  * Returns details of copy operations that have been completed
  *
- * Returns to the caller the user-provided "handles" for the copy operations
- * which have been completed by the hardware, and not already returned by
- * a previous call to this API.
+ * If the hdls_disable option was not set when the device was configured,
+ * the function will return to the caller the user-provided "handles" for
+ * the copy operations which have been completed by the hardware, and not
+ * already returned by a previous call to this API.
+ * If the hdls_disable option for the device was set on configure, the
+ * max_copies, src_hdls and dst_hdls parameters will be ignored, and the
+ * function returns the number of newly-completed operations.
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  * @param max_copies
  *   The number of entries which can fit in the src_hdls and dst_hdls
- *   arrays, i.e. max number of completed operations to report
+ *   arrays, i.e. max number of completed operations to report.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies
+ *   Array to hold the source handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies
+ *   Array to hold the destination handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @return
  *   -1 on error, with rte_errno set appropriately.
  *   Otherwise number of completed operations i.e. number of entries written
@@ -205,6 +223,11 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		return -1;
 	}
 
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
 	if (count > max_copies)
 		count = max_copies;
 
@@ -222,7 +245,7 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		src_hdls[i] = hdls[0];
 		dst_hdls[i] = hdls[1];
 	}
-
+end:
 	ioat->next_read = read;
 	ioat->completed += count;
 	return count;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 02/18] raw/ioat: split header for readability
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 01/18] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-25 15:27     ` Laatz, Kevin
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 03/18] raw/ioat: make the HW register spec private Bruce Richardson
                     ` (16 subsequent siblings)
  18 siblings, 1 reply; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Rather than having a single long complicated header file for general use, we
can split things so that there is one header with all the publicly needed
information - data structs and function prototypes - while the rest of the
internal details are put separately. This makes it easier to read,
understand and use the APIs.
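
To show what the public header needs to expose, here is a hedged sketch (not
part of the patch) of the copy flow an application drives using only the
prototypes kept in rte_ioat_rawdev.h. Device configuration and buffer setup
are omitted, and the handle values passed in are placeholders.

    #include <rte_memory.h>
    #include <rte_rawdev.h>
    #include <rte_ioat_rawdev.h>

    /* Offload one copy and poll until its completion is reported.
     * src/dst are IOVA/physical addresses of already-populated buffers. */
    static int
    copy_one(int dev_id, phys_addr_t src, phys_addr_t dst, unsigned int len)
    {
            uintptr_t src_hdl, dst_hdl;
            int n;

            if (rte_ioat_enqueue_copy(dev_id, src, dst, len,
                            (uintptr_t)1, (uintptr_t)2, 0) != 1)
                    return -1;              /* no space in descriptor ring */
            rte_ioat_do_copies(dev_id);     /* ring the doorbell */

            do {
                    n = rte_ioat_completed_copies(dev_id, 1,
                                    &src_hdl, &dst_hdl);
            } while (n == 0);               /* busy-wait for completion */

            return n < 0 ? -1 : 0;
    }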

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---

There are a couple of checkpatch errors about spacing in this patch;
however, these appear to be false positives.
---
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev.h     | 144 +---------------------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 164 +++++++++++++++++++++++++
 3 files changed, 171 insertions(+), 138 deletions(-)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 0878418aee..f66e9b605e 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,4 +8,5 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
+		'rte_ioat_rawdev_fns.h',
 		'rte_ioat_spec.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 4bc6491d91..7ace5c085a 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -14,12 +14,7 @@
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
-#include <x86intrin.h>
-#include <rte_atomic.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+#include <rte_common.h>
 
 /** Name of the device driver */
 #define IOAT_PMD_RAWDEV_NAME rawdev_ioat
@@ -38,38 +33,6 @@ struct rte_ioat_rawdev_config {
 	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
-/**
- * @internal
- * Structure representing a device instance
- */
-struct rte_ioat_rawdev {
-	struct rte_rawdev *rawdev;
-	const struct rte_memzone *mz;
-	const struct rte_memzone *desc_mz;
-
-	volatile struct rte_ioat_registers *regs;
-	phys_addr_t status_addr;
-	phys_addr_t ring_addr;
-
-	unsigned short ring_size;
-	struct rte_ioat_generic_hw_desc *desc_ring;
-	bool hdls_disable;
-	__m128i *hdls; /* completion handles for returning to user */
-
-
-	unsigned short next_read;
-	unsigned short next_write;
-
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
-	/* to report completions, the device will write status back here */
-	volatile uint64_t status __rte_cache_aligned;
-};
-
 /**
  * Enqueue a copy operation onto the ioat device
  *
@@ -104,38 +67,7 @@ struct rte_ioat_rawdev {
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence)
-{
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
-	unsigned short read = ioat->next_read;
-	unsigned short write = ioat->next_write;
-	unsigned short mask = ioat->ring_size - 1;
-	unsigned short space = mask + read - write;
-	struct rte_ioat_generic_hw_desc *desc;
-
-	if (space == 0) {
-		ioat->enqueue_failed++;
-		return 0;
-	}
-
-	ioat->next_write = write + 1;
-	write &= mask;
-
-	desc = &ioat->desc_ring[write];
-	desc->size = length;
-	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
-	desc->src_addr = src;
-	desc->dest_addr = dst;
-	if (!ioat->hdls_disable)
-		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
-					(int64_t)src_hdl);
-
-	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
-
-	ioat->enqueued++;
-	return 1;
-}
+		int fence);
 
 /**
  * Trigger hardware to begin performing enqueued copy operations
@@ -147,31 +79,7 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
-{
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
-	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
-			.control.completion_update = 1;
-	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
-	ioat->started = ioat->enqueued;
-}
-
-/**
- * @internal
- * Returns the index of the last completed operation.
- */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
-{
-	uint64_t status = ioat->status;
-
-	/* lower 3 bits indicate "transfer status" : active, idle, halted.
-	 * We can ignore bit 0.
-	 */
-	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
-	return (status - ioat->ring_addr) >> 6;
-}
+rte_ioat_do_copies(int dev_id);
 
 /**
  * Returns details of copy operations that have been completed
@@ -206,49 +114,9 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
  */
 static inline int
 rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
-		uintptr_t *src_hdls, uintptr_t *dst_hdls)
-{
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
-	unsigned short mask = (ioat->ring_size - 1);
-	unsigned short read = ioat->next_read;
-	unsigned short end_read, count;
-	int error;
-	int i = 0;
-
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
-	count = (end_read - (read & mask)) & mask;
-
-	if (error) {
-		rte_errno = EIO;
-		return -1;
-	}
-
-	if (ioat->hdls_disable) {
-		read += count;
-		goto end;
-	}
-
-	if (count > max_copies)
-		count = max_copies;
-
-	for (; i < count - 1; i += 2, read += 2) {
-		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
-		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
+		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
-		_mm_storeu_si128((void *)&src_hdls[i],
-				_mm_unpacklo_epi64(hdls0, hdls1));
-		_mm_storeu_si128((void *)&dst_hdls[i],
-				_mm_unpackhi_epi64(hdls0, hdls1));
-	}
-	for (; i < count; i++, read++) {
-		uintptr_t *hdls = (void *)&ioat->hdls[read & mask];
-		src_hdls[i] = hdls[0];
-		dst_hdls[i] = hdls[1];
-	}
-end:
-	ioat->next_read = read;
-	ioat->completed += count;
-	return count;
-}
+/* include the implementation details from a separate file */
+#include "rte_ioat_rawdev_fns.h"
 
 #endif /* _RTE_IOAT_RAWDEV_H_ */
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
new file mode 100644
index 0000000000..06b4edcbb0
--- /dev/null
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Intel Corporation
+ */
+#ifndef _RTE_IOAT_RAWDEV_FNS_H_
+#define _RTE_IOAT_RAWDEV_FNS_H_
+
+#include <x86intrin.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device instance
+ */
+struct rte_ioat_rawdev {
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	const struct rte_memzone *desc_mz;
+
+	volatile struct rte_ioat_registers *regs;
+	phys_addr_t status_addr;
+	phys_addr_t ring_addr;
+
+	unsigned short ring_size;
+	bool hdls_disable;
+	struct rte_ioat_generic_hw_desc *desc_ring;
+	__m128i *hdls; /* completion handles for returning to user */
+
+
+	unsigned short next_read;
+	unsigned short next_write;
+
+	/* some statistics for tracking, if added/changed update xstats fns*/
+	uint64_t enqueue_failed __rte_cache_aligned;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+
+	/* to report completions, the device will write status back here */
+	volatile uint64_t status __rte_cache_aligned;
+};
+
+/**
+ * Enqueue a copy operation onto the ioat device
+ */
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
+		int fence)
+{
+	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	unsigned short read = ioat->next_read;
+	unsigned short write = ioat->next_write;
+	unsigned short mask = ioat->ring_size - 1;
+	unsigned short space = mask + read - write;
+	struct rte_ioat_generic_hw_desc *desc;
+
+	if (space == 0) {
+		ioat->enqueue_failed++;
+		return 0;
+	}
+
+	ioat->next_write = write + 1;
+	write &= mask;
+
+	desc = &ioat->desc_ring[write];
+	desc->size = length;
+	/* set descriptor write-back every 16th descriptor */
+	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
+	desc->src_addr = src;
+	desc->dest_addr = dst;
+
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
+	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
+
+	ioat->enqueued++;
+	return 1;
+}
+
+/**
+ * Trigger hardware to begin performing enqueued copy operations
+ */
+static inline void
+rte_ioat_do_copies(int dev_id)
+{
+	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
+			.control.completion_update = 1;
+	rte_compiler_barrier();
+	ioat->regs->dmacount = ioat->next_write;
+	ioat->started = ioat->enqueued;
+}
+
+/**
+ * @internal
+ * Returns the index of the last completed operation.
+ */
+static inline int
+rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+{
+	uint64_t status = ioat->status;
+
+	/* lower 3 bits indicate "transfer status" : active, idle, halted.
+	 * We can ignore bit 0.
+	 */
+	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
+	return (status - ioat->ring_addr) >> 6;
+}
+
+/**
+ * Returns details of copy operations that have been completed
+ */
+static inline int
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	unsigned short mask = (ioat->ring_size - 1);
+	unsigned short read = ioat->next_read;
+	unsigned short end_read, count;
+	int error;
+	int i = 0;
+
+	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	count = (end_read - (read & mask)) & mask;
+
+	if (error) {
+		rte_errno = EIO;
+		return -1;
+	}
+
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
+	if (count > max_copies)
+		count = max_copies;
+
+	for (; i < count - 1; i += 2, read += 2) {
+		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
+		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
+
+		_mm_storeu_si128((void *)&src_hdls[i],
+				_mm_unpacklo_epi64(hdls0, hdls1));
+		_mm_storeu_si128((void *)&dst_hdls[i],
+				_mm_unpackhi_epi64(hdls0, hdls1));
+	}
+	for (; i < count; i++, read++) {
+		uintptr_t *hdls = (void *)&ioat->hdls[read & mask];
+		src_hdls[i] = hdls[0];
+		dst_hdls[i] = hdls[1];
+	}
+
+end:
+	ioat->next_read = read;
+	ioat->completed += count;
+	return count;
+}
+
+#endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 03/18] raw/ioat: make the HW register spec private
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 01/18] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 02/18] raw/ioat: split header for readability Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 04/18] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
                     ` (15 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Only a few definitions from the hardware spec are actually used in the
driver runtime, so we can copy over those few and make the rest of the spec
a private header in the driver.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c                |  3 ++
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} | 26 -----------
 drivers/raw/ioat/meson.build                  |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 44 +++++++++++++++++--
 4 files changed, 44 insertions(+), 32 deletions(-)
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (91%)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 856b55cb6e..bba072f2a7 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -4,10 +4,12 @@
 
 #include <rte_cycles.h>
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 #include <rte_string_fns.h>
 #include <rte_rawdev_pmd.h>
 
 #include "rte_ioat_rawdev.h"
+#include "ioat_spec.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -261,6 +263,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
+	ioat->doorbell = &ioat->regs->dmacount;
 	ioat->ring_size = 0;
 	ioat->desc_ring = NULL;
 	ioat->status_addr = ioat->mz->iova +
diff --git a/drivers/raw/ioat/rte_ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
similarity index 91%
rename from drivers/raw/ioat/rte_ioat_spec.h
rename to drivers/raw/ioat/ioat_spec.h
index c6e7929b2c..9645e16d41 100644
--- a/drivers/raw/ioat/rte_ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -86,32 +86,6 @@ struct rte_ioat_registers {
 
 #define RTE_IOAT_CHANCMP_ALIGN			8	/* CHANCMP address must be 64-bit aligned */
 
-struct rte_ioat_generic_hw_desc {
-	uint32_t size;
-	union {
-		uint32_t control_raw;
-		struct {
-			uint32_t int_enable: 1;
-			uint32_t src_snoop_disable: 1;
-			uint32_t dest_snoop_disable: 1;
-			uint32_t completion_update: 1;
-			uint32_t fence: 1;
-			uint32_t reserved2: 1;
-			uint32_t src_page_break: 1;
-			uint32_t dest_page_break: 1;
-			uint32_t bundle: 1;
-			uint32_t dest_dca: 1;
-			uint32_t hint: 1;
-			uint32_t reserved: 13;
-			uint32_t op: 8;
-		} control;
-	} u;
-	uint64_t src_addr;
-	uint64_t dest_addr;
-	uint64_t next;
-	uint64_t op_specific[4];
-};
-
 struct rte_ioat_dma_hw_desc {
 	uint32_t size;
 	union {
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index f66e9b605e..06636f8a9f 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,5 +8,4 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-		'rte_ioat_rawdev_fns.h',
-		'rte_ioat_spec.h')
+		'rte_ioat_rawdev_fns.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 06b4edcbb0..0cee6b1b09 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -5,9 +5,37 @@
 #define _RTE_IOAT_RAWDEV_FNS_H_
 
 #include <x86intrin.h>
-#include <rte_memzone.h>
 #include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device descriptor
+ */
+struct rte_ioat_generic_hw_desc {
+	uint32_t size;
+	union {
+		uint32_t control_raw;
+		struct {
+			uint32_t int_enable: 1;
+			uint32_t src_snoop_disable: 1;
+			uint32_t dest_snoop_disable: 1;
+			uint32_t completion_update: 1;
+			uint32_t fence: 1;
+			uint32_t reserved2: 1;
+			uint32_t src_page_break: 1;
+			uint32_t dest_page_break: 1;
+			uint32_t bundle: 1;
+			uint32_t dest_dca: 1;
+			uint32_t hint: 1;
+			uint32_t reserved: 13;
+			uint32_t op: 8;
+		} control;
+	} u;
+	uint64_t src_addr;
+	uint64_t dest_addr;
+	uint64_t next;
+	uint64_t op_specific[4];
+};
 
 /**
  * @internal
@@ -18,7 +46,7 @@ struct rte_ioat_rawdev {
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile struct rte_ioat_registers *regs;
+	volatile uint16_t *doorbell;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -39,8 +67,16 @@ struct rte_ioat_rawdev {
 
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
+
+	/* pointer to the register bar */
+	volatile struct rte_ioat_registers *regs;
 };
 
+#define RTE_IOAT_CHANSTS_IDLE			0x1
+#define RTE_IOAT_CHANSTS_SUSPENDED		0x2
+#define RTE_IOAT_CHANSTS_HALTED		0x3
+#define RTE_IOAT_CHANSTS_ARMED			0x4
+
 /**
  * Enqueue a copy operation onto the ioat device
  */
@@ -90,7 +126,7 @@ rte_ioat_do_copies(int dev_id)
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
 			.control.completion_update = 1;
 	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
+	*ioat->doorbell = ioat->next_write;
 	ioat->started = ioat->enqueued;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 04/18] usertools/dpdk-devbind.py: add support for DSA HW
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (2 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 03/18] raw/ioat: make the HW register spec private Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 05/18] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
                     ` (14 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Intel Data Streaming Accelerator (Intel DSA) is a high-performance data
copy and transformation accelerator which will be integrated in future
Intel processors [1].

Add DSA device support to dpdk-devbind.py script.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 usertools/dpdk-devbind.py | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index 86b6b53c40..35d4384041 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -51,6 +51,8 @@
               'SVendor': None, 'SDevice': None}
 intel_ioat_icx = {'Class': '08', 'Vendor': '8086', 'Device': '0b00',
               'SVendor': None, 'SDevice': None}
+intel_idxd_spr = {'Class': '08', 'Vendor': '8086', 'Device': '0b25',
+              'SVendor': None, 'SDevice': None}
 intel_ntb_skx = {'Class': '06', 'Vendor': '8086', 'Device': '201c',
               'SVendor': None, 'SDevice': None}
 
@@ -60,7 +62,7 @@
 eventdev_devices = [cavium_sso, cavium_tim, octeontx2_sso]
 mempool_devices = [cavium_fpa, octeontx2_npa]
 compress_devices = [cavium_zip]
-misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_ntb_skx, octeontx2_dma]
+misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_idxd_spr, intel_ntb_skx, octeontx2_dma]
 
 # global dict ethernet devices present. Dictionary indexed by PCI address.
 # Each device within this is itself a dictionary of device properties
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 05/18] raw/ioat: add skeleton for VFIO/UIO based DSA device
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (3 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 04/18] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 06/18] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
                     ` (13 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Add in the basic probe/remove skeleton code for DSA devices which are bound
directly to a vfio or uio driver. The kernel module supporting these devices
uses the "idxd" name, so that name is used as the function and file prefix to
avoid conflict with the existing "ioat"-prefixed functions.

Since we are adding new files to the driver and there will be common
definitions shared between the various files, we create a new internal
header file ioat_private.h to hold common macros and function prototypes.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/rawdevs/ioat.rst     | 69 ++++++++++-----------------------
 drivers/raw/ioat/idxd_pci.c     | 56 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 27 +++++++++++++
 drivers/raw/ioat/ioat_rawdev.c  |  9 +----
 drivers/raw/ioat/meson.build    |  6 ++-
 5 files changed, 108 insertions(+), 59 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/ioat_private.h

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index c46460ff45..b83cf0f7db 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -3,10 +3,12 @@
 
 .. include:: <isonum.txt>
 
-IOAT Rawdev Driver for Intel\ |reg| QuickData Technology
-======================================================================
+IOAT Rawdev Driver
+===================
 
 The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg|
+Data Streaming Accelerator `(Intel DSA)
+<https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator>`_ and for Intel\ |reg|
 QuickData Technology, part of Intel\ |reg| I/O Acceleration Technology
 `(Intel I/OAT)
 <https://www.intel.com/content/www/us/en/wireless-network/accel-technology.html>`_.
@@ -17,61 +19,30 @@ be done by software, freeing up CPU cycles for other tasks.
 Hardware Requirements
 ----------------------
 
-On Linux, the presence of an Intel\ |reg| QuickData Technology hardware can
-be detected by checking the output of the ``lspci`` command, where the
-hardware will be often listed as "Crystal Beach DMA" or "CBDMA". For
-example, on a system with Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @2.20GHz,
-lspci shows:
-
-.. code-block:: console
-
-  # lspci | grep DMA
-  00:04.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 0 (rev 01)
-  00:04.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 1 (rev 01)
-  00:04.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 2 (rev 01)
-  00:04.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 3 (rev 01)
-  00:04.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 4 (rev 01)
-  00:04.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 5 (rev 01)
-  00:04.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 6 (rev 01)
-  00:04.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 7 (rev 01)
-
-On a system with Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz, lspci
-shows:
-
-.. code-block:: console
-
-  # lspci | grep DMA
-  00:04.0 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.1 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.2 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.3 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.4 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.5 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.6 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.7 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-
+The ``dpdk-devbind.py`` script, included with DPDK,
+can be used to show the presence of supported hardware.
+Running ``dpdk-devbind.py --status-dev misc`` will show all the miscellaneous,
+or rawdev-based devices on the system.
+For Intel\ |reg| QuickData Technology devices, the hardware will be often listed as "Crystal Beach DMA",
+or "CBDMA".
+For Intel\ |reg| DSA devices, they are currently (at time of writing) appearing as devices with type "0b25",
+due to the absence of pci-id database entries for them at this point.
 
 Compilation
 ------------
 
-For builds done with ``make``, the driver compilation is enabled by the
-``CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV`` build configuration option. This is
-enabled by default in builds for x86 platforms, and disabled in other
-configurations.
-
-For builds using ``meson`` and ``ninja``, the driver will be built when the
-target platform is x86-based.
+For builds using ``meson`` and ``ninja``, the driver will be built when the target platform is x86-based.
+No additional compilation steps are necessary.
 
 Device Setup
 -------------
 
-The Intel\ |reg| QuickData Technology HW devices will need to be bound to a
-user-space IO driver for use. The script ``dpdk-devbind.py`` script
-included with DPDK can be used to view the state of the devices and to bind
-them to a suitable DPDK-supported kernel driver. When querying the status
-of the devices, they will appear under the category of "Misc (rawdev)
-devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
-used to see the state of those devices alone.
+The HW devices to be used will need to be bound to a user-space IO driver for use.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported kernel driver, such as ``vfio-pci``.
+For example::
+
+	$ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1
 
 Device Probing and Initialization
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
new file mode 100644
index 0000000000..1a30e9c316
--- /dev/null
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_pci.h>
+
+#include "ioat_private.h"
+
+#define IDXD_VENDOR_ID		0x8086
+#define IDXD_DEVICE_ID_SPR	0x0B25
+
+#define IDXD_PMD_RAWDEV_NAME_PCI rawdev_idxd_pci
+
+const struct rte_pci_id pci_id_idxd_map[] = {
+	{ RTE_PCI_DEVICE(IDXD_VENDOR_ID, IDXD_DEVICE_ID_SPR) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+{
+	int ret = 0;
+	char name[PCI_PRI_STR_SIZE];
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
+	dev->device.driver = &drv->driver;
+
+	return ret;
+}
+
+static int
+idxd_rawdev_remove_pci(struct rte_pci_device *dev)
+{
+	char name[PCI_PRI_STR_SIZE];
+	int ret = 0;
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+
+	IOAT_PMD_INFO("Closing %s on NUMA node %d",
+			name, dev->device.numa_node);
+
+	return ret;
+}
+
+struct rte_pci_driver idxd_pmd_drv_pci = {
+	.id_table = pci_id_idxd_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = idxd_rawdev_probe_pci,
+	.remove = idxd_rawdev_remove_pci,
+};
+
+RTE_PMD_REGISTER_PCI(IDXD_PMD_RAWDEV_NAME_PCI, idxd_pmd_drv_pci);
+RTE_PMD_REGISTER_PCI_TABLE(IDXD_PMD_RAWDEV_NAME_PCI, pci_id_idxd_map);
+RTE_PMD_REGISTER_KMOD_DEP(IDXD_PMD_RAWDEV_NAME_PCI,
+			  "* igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
new file mode 100644
index 0000000000..d87d4d055e
--- /dev/null
+++ b/drivers/raw/ioat/ioat_private.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _IOAT_PRIVATE_H_
+#define _IOAT_PRIVATE_H_
+
+/**
+ * @file ioat_private.h
+ *
+ * Private data structures for the idxd/DSA part of ioat device driver
+ *
+ * @warning
+ * @b EXPERIMENTAL: these structures and APIs may change without prior notice
+ */
+
+extern int ioat_pmd_logtype;
+
+#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
+		ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
+
+#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
+#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
+#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
+#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
+
+#endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index bba072f2a7..20c8b671a6 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -10,6 +10,7 @@
 
 #include "rte_ioat_rawdev.h"
 #include "ioat_spec.h"
+#include "ioat_private.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -29,14 +30,6 @@ static struct rte_pci_driver ioat_pmd_drv;
 
 RTE_LOG_REGISTER(ioat_pmd_logtype, rawdev.ioat, INFO);
 
-#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
-	ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
-
-#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
-#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
-#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
-#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
-
 #define DESC_SZ sizeof(struct rte_ioat_generic_hw_desc)
 #define COMPLETION_SZ sizeof(__m128i)
 
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 06636f8a9f..3529635e9c 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -3,8 +3,10 @@
 
 build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
-sources = files('ioat_rawdev.c',
-		'ioat_rawdev_test.c')
+sources = files(
+	'idxd_pci.c',
+	'ioat_rawdev.c',
+	'ioat_rawdev_test.c')
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 06/18] raw/ioat: add vdev probe for DSA/idxd devices
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (4 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 05/18] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 07/18] raw/ioat: include example configuration script Bruce Richardson
                     ` (12 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Intel DSA devices can be exposed to userspace via a kernel driver, so they can
be used without having to bind them to vfio/uio. Therefore we add support
for using those kernel-configured devices as vdevs, taking as parameter the
individual HW work queue to be used by the vdev.
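
As a usage illustration (not part of the patch), the same work-queue argument
that the documentation passes on the command line via --vdev can also be
supplied from application code through the vdev bus API. The device and queue
numbers below are placeholders for whatever was configured via the kernel
driver.

    #include <rte_bus_vdev.h>

    /* Attach work queue 0 of DSA instance 0 as an idxd rawdev, equivalent
     * to passing --vdev=rawdev_idxd,wq=0.0 on the command line. */
    static int
    attach_idxd_wq(void)
    {
            return rte_vdev_init("rawdev_idxd", "wq=0.0");
    }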

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/rawdevs/ioat.rst  |  68 +++++++++++++++++--
 drivers/raw/ioat/idxd_vdev.c | 123 +++++++++++++++++++++++++++++++++++
 drivers/raw/ioat/meson.build |   6 +-
 3 files changed, 192 insertions(+), 5 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_vdev.c

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index b83cf0f7db..43a69ec4c6 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -37,9 +37,62 @@ No additional compilation steps are necessary.
 Device Setup
 -------------
 
+Depending on support provided by the PMD, HW devices can either use the kernel configured driver
+or be bound to a user-space IO driver for use.
+For example, Intel\ |reg| DSA devices can use the IDXD kernel driver or DPDK-supported drivers,
+such as ``vfio-pci``.
+
+Intel\ |reg| DSA devices using idxd kernel driver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use a Intel\ |reg| DSA device bound to the IDXD kernel driver, the device must first be configured.
+The `accel-config <https://github.com/intel/idxd-config>`_ utility library can be used for configuration.
+
+.. note::
+        The device configuration can also be done by directly interacting with the sysfs nodes.
+
+There are some mandatory configuration steps before being able to use a device with an application.
+The internal engines, which do the copies or other operations,
+and the work-queues, which are used by applications to assign work to the device,
+need to be assigned to groups, and the various other configuration options,
+such as priority or queue depth, need to be set for each queue.
+
+To assign an engine to a group::
+
+        $ accel-config config-engine dsa0/engine0.0 --group-id=0
+        $ accel-config config-engine dsa0/engine0.1 --group-id=1
+
+To assign work queues to groups for passing descriptors to the engines a similar accel-config command can be used.
+However, the work queues also need to be configured depending on the use-case.
+Some configuration options include:
+
+* mode (Dedicated/Shared): Indicates whether a WQ may accept jobs from multiple queues simultaneously.
+* priority: WQ priority between 1 and 15. Larger value means higher priority.
+* wq-size: the size of the WQ. Sum of all WQ sizes must be less than the total-size defined by the device.
+* type: WQ type (kernel/mdev/user). Determines how the device is presented.
+* name: identifier given to the WQ.
+
+Example configuration for a work queue::
+
+        $ accel-config config-wq dsa0/wq0.0 --group-id=0 \
+           --mode=dedicated --priority=10 --wq-size=8 \
+           --type=user --name=app1
+
+Once the devices have been configured, they need to be enabled::
+
+        $ accel-config enable-device dsa0
+        $ accel-config enable-wq dsa0/wq0.0
+
+Check the device configuration::
+
+        $ accel-config list
+
+Devices using VFIO/UIO drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
 The HW devices to be used will need to be bound to a user-space IO driver for use.
 The ``dpdk-devbind.py`` script can be used to view the state of the devices
-and to bind them to a suitable DPDK-supported kernel driver, such as ``vfio-pci``.
+and to bind them to a suitable DPDK-supported driver, such as ``vfio-pci``.
 For example::
 
 	$ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1
@@ -47,9 +100,16 @@ For example::
 Device Probing and Initialization
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Once bound to a suitable kernel device driver, the HW devices will be found
-as part of the PCI scan done at application initialization time. No vdev
-parameters need to be passed to create or initialize the device.
+For devices bound to a suitable DPDK-supported VFIO/UIO driver, the HW devices will
+be found as part of the device scan done at application initialization time without
+the need to pass parameters to the application.
+
+If the device is bound to the IDXD kernel driver (and previously configured with sysfs),
+then a specific work queue needs to be passed to the application via a vdev parameter.
+This vdev parameter takes the driver name and work queue name as parameters.
+For example, to use work queue 0 on Intel\ |reg| DSA instance 0::
+
+        $ dpdk-test --no-pci --vdev=rawdev_idxd,wq=0.0
 
 Once probed successfully, the device will appear as a ``rawdev``, that is a
 "raw device type" inside DPDK, and can be accessed using APIs from the
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
new file mode 100644
index 0000000000..0509fc0842
--- /dev/null
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_vdev.h>
+#include <rte_kvargs.h>
+#include <rte_string_fns.h>
+#include <rte_rawdev_pmd.h>
+
+#include "ioat_private.h"
+
+/** Name of the device driver */
+#define IDXD_PMD_RAWDEV_NAME rawdev_idxd
+/* takes a work queue(WQ) as parameter */
+#define IDXD_ARG_WQ		"wq"
+
+static const char * const valid_args[] = {
+	IDXD_ARG_WQ,
+	NULL
+};
+
+struct idxd_vdev_args {
+	uint8_t device_id;
+	uint8_t wq_id;
+};
+
+static int
+idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
+			  void *extra_args)
+{
+	struct idxd_vdev_args *args = (struct idxd_vdev_args *)extra_args;
+	int dev, wq, bytes = -1;
+	int read = sscanf(value, "%d.%d%n", &dev, &wq, &bytes);
+
+	if (read != 2 || bytes != (int)strlen(value)) {
+		IOAT_PMD_ERR("Error parsing work-queue id. Must be in <dev_id>.<queue_id> format");
+		return -EINVAL;
+	}
+
+	if (dev >= UINT8_MAX || wq >= UINT8_MAX) {
+		IOAT_PMD_ERR("Device or work queue id out of range");
+		return -EINVAL;
+	}
+
+	args->device_id = dev;
+	args->wq_id = wq;
+
+	return 0;
+}
+
+static int
+idxd_vdev_parse_params(struct rte_kvargs *kvlist, struct idxd_vdev_args *args)
+{
+	if (rte_kvargs_count(kvlist, IDXD_ARG_WQ) == 1) {
+		if (rte_kvargs_process(kvlist, IDXD_ARG_WQ,
+				&idxd_rawdev_parse_wq, args) < 0) {
+			IOAT_PMD_ERR("Error parsing %s", IDXD_ARG_WQ);
+			goto free;
+		}
+	} else {
+		IOAT_PMD_ERR("%s is a mandatory arg", IDXD_ARG_WQ);
+		return -EINVAL;
+	}
+
+	return 0;
+
+free:
+	if (kvlist)
+		rte_kvargs_free(kvlist);
+	return -EINVAL;
+}
+
+static int
+idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
+{
+	struct rte_kvargs *kvlist;
+	struct idxd_vdev_args vdev_args;
+	const char *name;
+	int ret = 0;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Initializing pmd_idxd for %s", name);
+
+	kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev), valid_args);
+	if (kvlist == NULL) {
+		IOAT_PMD_ERR("Invalid kvargs key");
+		return -EINVAL;
+	}
+
+	ret = idxd_vdev_parse_params(kvlist, &vdev_args);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to parse kvargs");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
+{
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Remove DSA vdev %p", name);
+
+	return 0;
+}
+
+struct rte_vdev_driver idxd_rawdev_drv_vdev = {
+	.probe = idxd_rawdev_probe_vdev,
+	.remove = idxd_rawdev_remove_vdev,
+};
+
+RTE_PMD_REGISTER_VDEV(IDXD_PMD_RAWDEV_NAME, idxd_rawdev_drv_vdev);
+RTE_PMD_REGISTER_PARAM_STRING(IDXD_PMD_RAWDEV_NAME,
+			      "wq=<string>");
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 3529635e9c..b343b7367b 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -5,9 +5,13 @@ build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
+	'idxd_vdev.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
-deps += ['rawdev', 'bus_pci', 'mbuf']
+deps += ['bus_pci',
+	'bus_vdev',
+	'mbuf',
+	'rawdev']
 
 install_headers('rte_ioat_rawdev.h',
 		'rte_ioat_rawdev_fns.h')
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 07/18] raw/ioat: include example configuration script
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (5 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 06/18] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 08/18] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
                     ` (11 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Devices managed by the idxd kernel driver must be configured for DPDK use
before they can be used by the ioat driver. This example script serves both
as a quick way to get the driver set up with a simple configuration, and as
the basis for users to modify it and create their own configuration
scripts.
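
As a usage sketch (the instance number and queue count below are purely
illustrative), configuring DSA instance 0 with four work queues using this
script could look like:

	$ drivers/raw/ioat/dpdk_idxd_cfg.py 0 -q 4

The resulting /dev/dsa/wq0.* nodes can then be used with the vdev driver
added earlier in this series.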

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/rawdevs/ioat.rst       |  2 +
 drivers/raw/ioat/dpdk_idxd_cfg.py | 79 +++++++++++++++++++++++++++++++
 2 files changed, 81 insertions(+)
 create mode 100755 drivers/raw/ioat/dpdk_idxd_cfg.py

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 43a69ec4c6..8d241d7e77 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -50,6 +50,8 @@ The `accel-config <https://github.com/intel/idxd-config>`_ utility library can b
 
 .. note::
         The device configuration can also be done by directly interacting with the sysfs nodes.
+        An example of how this may be done can be seen in the script ``dpdk_idxd_cfg.py``
+        included in the driver source directory.
 
 There are some mandatory configuration steps before being able to use a device with an application.
 The internal engines, which do the copies or other operations,
diff --git a/drivers/raw/ioat/dpdk_idxd_cfg.py b/drivers/raw/ioat/dpdk_idxd_cfg.py
new file mode 100755
index 0000000000..bce4bb5bd4
--- /dev/null
+++ b/drivers/raw/ioat/dpdk_idxd_cfg.py
@@ -0,0 +1,79 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+"""
+Configure an entire Intel DSA instance, using idxd kernel driver, for DPDK use
+"""
+
+import sys
+import argparse
+import os
+import os.path
+
+
+class SysfsDir:
+    "Used to read/write paths in a sysfs directory"
+    def __init__(self, path):
+        self.path = path
+
+    def read_int(self, filename):
+        "Return a value from sysfs file"
+        with open(os.path.join(self.path, filename)) as f:
+            return int(f.readline())
+
+    def write_values(self, values):
+        "write dictionary, where key is filename and value is value to write"
+        for filename, contents in values.items():
+            with open(os.path.join(self.path, filename), "w") as f:
+                f.write(str(contents))
+
+
+def configure_dsa(dsa_id, queues):
+    "Configure the DSA instance with appropriate number of queues"
+    dsa_dir = SysfsDir(f"/sys/bus/dsa/devices/dsa{dsa_id}")
+    drv_dir = SysfsDir("/sys/bus/dsa/drivers/dsa")
+
+    max_groups = dsa_dir.read_int("max_groups")
+    max_engines = dsa_dir.read_int("max_engines")
+    max_queues = dsa_dir.read_int("max_work_queues")
+    max_tokens = dsa_dir.read_int("max_tokens")
+
+    # we want one engine per group
+    nb_groups = min(max_engines, max_groups)
+    for grp in range(nb_groups):
+        dsa_dir.write_values({f"engine{dsa_id}.{grp}/group_id": grp})
+
+    nb_queues = min(queues, max_queues)
+    if queues > nb_queues:
+        print(f"Setting number of queues to max supported value: {max_queues}")
+
+    # configure each queue
+    for q in range(nb_queues):
+        wq_dir = SysfsDir(os.path.join(dsa_dir.path, f"wq{dsa_id}.{q}"))
+        wq_dir.write_values({"group_id": q % nb_groups,
+                             "type": "user",
+                             "mode": "dedicated",
+                             "name": f"dpdk_wq{dsa_id}.{q}",
+                             "priority": 1,
+                             "size": int(max_tokens / nb_queues)})
+
+    # enable device and then queues
+    drv_dir.write_values({"bind": f"dsa{dsa_id}"})
+    for q in range(nb_queues):
+        drv_dir.write_values({"bind": f"wq{dsa_id}.{q}"})
+
+
+def main(args):
+    "Main function, does arg parsing and calls config function"
+    arg_p = argparse.ArgumentParser(
+        description="Configure whole DSA device instance for DPDK use")
+    arg_p.add_argument('dsa_id', type=int, help="DSA instance number")
+    arg_p.add_argument('-q', metavar='queues', type=int, default=255,
+                       help="Number of queues to set up")
+    parsed_args = arg_p.parse_args(args[1:])
+    configure_dsa(parsed_args.dsa_id, parsed_args.q)
+
+
+if __name__ == "__main__":
+    main(sys.argv)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 08/18] raw/ioat: create rawdev instances on idxd PCI probe
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (6 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 07/18] raw/ioat: include example configuration script Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-25 15:27     ` Laatz, Kevin
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 09/18] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
                     ` (10 subsequent siblings)
  18 siblings, 1 reply; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

When a matching device is found via PCI probe, create a rawdev instance for
each queue on the hardware. Use an empty self-test function for these
devices so that the overall rawdev_autotest does not report failures.
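
For example, with a DSA instance bound to vfio-pci (the PCI address here is
just the one used in the documentation example, not a real assignment), this
probe path then creates one rawdev per work queue on that device:

	$ dpdk-devbind.py -b vfio-pci 00:04.0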

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            | 236 ++++++++++++++++++++++++-
 drivers/raw/ioat/ioat_common.c         |  61 +++++++
 drivers/raw/ioat/ioat_private.h        |  31 ++++
 drivers/raw/ioat/ioat_rawdev_test.c    |   7 +
 drivers/raw/ioat/ioat_spec.h           |  64 +++++++
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  35 +++-
 7 files changed, 432 insertions(+), 3 deletions(-)
 create mode 100644 drivers/raw/ioat/ioat_common.c

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 1a30e9c316..72f4ecebb7 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -3,8 +3,10 @@
  */
 
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 
 #include "ioat_private.h"
+#include "ioat_spec.h"
 
 #define IDXD_VENDOR_ID		0x8086
 #define IDXD_DEVICE_ID_SPR	0x0B25
@@ -16,17 +18,245 @@ const struct rte_pci_id pci_id_idxd_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+static inline int
+idxd_pci_dev_command(struct idxd_rawdev *idxd, enum rte_idxd_cmds command)
+{
+	uint8_t err_code;
+	uint16_t qid = idxd->qid;
+	int i = 0;
+
+	if (command >= idxd_disable_wq && command <= idxd_reset_wq)
+		qid = (1 << qid);
+	rte_spinlock_lock(&idxd->u.pci->lk);
+	idxd->u.pci->regs->cmd = (command << IDXD_CMD_SHIFT) | qid;
+
+	do {
+		rte_pause();
+		err_code = idxd->u.pci->regs->cmdstatus;
+		if (++i >= 1000) {
+			IOAT_PMD_ERR("Timeout waiting for command response from HW");
+			rte_spinlock_unlock(&idxd->u.pci->lk);
+			return err_code;
+		}
+	} while (idxd->u.pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK);
+	rte_spinlock_unlock(&idxd->u.pci->lk);
+
+	return err_code & CMDSTATUS_ERR_MASK;
+}
+
+static int
+idxd_is_wq_enabled(struct idxd_rawdev *idxd)
+{
+	uint32_t state = idxd->u.pci->wq_regs[idxd->qid].wqcfg[WQ_STATE_IDX];
+	return ((state >> WQ_STATE_SHIFT) & WQ_STATE_MASK) == 0x1;
+}
+
+static const struct rte_rawdev_ops idxd_pci_ops = {
+		.dev_selftest = idxd_rawdev_test
+};
+
+/* each portal uses 4 x 4k pages */
+#define IDXD_PORTAL_SIZE (4096 * 4)
+
+static int
+init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd)
+{
+	struct idxd_pci_common *pci;
+	uint8_t nb_groups, nb_engines, nb_wqs;
+	uint16_t grp_offset, wq_offset; /* how far into bar0 the regs are */
+	uint16_t wq_size, total_wq_size;
+	uint8_t lg2_max_batch, lg2_max_copy_size;
+	unsigned int i, err_code;
+
+	pci = malloc(sizeof(*pci));
+	if (pci == NULL) {
+		IOAT_PMD_ERR("%s: Can't allocate memory", __func__);
+		goto err;
+	}
+	rte_spinlock_init(&pci->lk);
+
+	/* assign the bar registers, and then configure device */
+	pci->regs = dev->mem_resource[0].addr;
+	grp_offset = (uint16_t)pci->regs->offsets[0];
+	pci->grp_regs = RTE_PTR_ADD(pci->regs, grp_offset * 0x100);
+	wq_offset = (uint16_t)(pci->regs->offsets[0] >> 16);
+	pci->wq_regs = RTE_PTR_ADD(pci->regs, wq_offset * 0x100);
+	pci->portals = dev->mem_resource[2].addr;
+
+	/* sanity check device status */
+	if (pci->regs->gensts & GENSTS_DEV_STATE_MASK) {
+		/* need function-level-reset (FLR) or is enabled */
+		IOAT_PMD_ERR("Device status is not disabled, cannot init");
+		goto err;
+	}
+	if (pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK) {
+		/* command in progress */
+		IOAT_PMD_ERR("Device has a command in progress, cannot init");
+		goto err;
+	}
+
+	/* read basic info about the hardware for use when configuring */
+	nb_groups = (uint8_t)pci->regs->grpcap;
+	nb_engines = (uint8_t)pci->regs->engcap;
+	nb_wqs = (uint8_t)(pci->regs->wqcap >> 16);
+	total_wq_size = (uint16_t)pci->regs->wqcap;
+	lg2_max_copy_size = (uint8_t)(pci->regs->gencap >> 16) & 0x1F;
+	lg2_max_batch = (uint8_t)(pci->regs->gencap >> 21) & 0x0F;
+
+	IOAT_PMD_DEBUG("nb_groups = %u, nb_engines = %u, nb_wqs = %u",
+			nb_groups, nb_engines, nb_wqs);
+
+	/* zero out any old config */
+	for (i = 0; i < nb_groups; i++) {
+		pci->grp_regs[i].grpengcfg = 0;
+		pci->grp_regs[i].grpwqcfg[0] = 0;
+	}
+	for (i = 0; i < nb_wqs; i++)
+		pci->wq_regs[i].wqcfg[0] = 0;
+
+	/* put each engine into a separate group to avoid reordering */
+	if (nb_groups > nb_engines)
+		nb_groups = nb_engines;
+	if (nb_groups < nb_engines)
+		nb_engines = nb_groups;
+
+	/* assign engines to groups, round-robin style */
+	for (i = 0; i < nb_engines; i++) {
+		IOAT_PMD_DEBUG("Assigning engine %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpengcfg |= (1ULL << i);
+	}
+
+	/* now do the same for queues and give work slots to each queue */
+	wq_size = total_wq_size / nb_wqs;
+	IOAT_PMD_DEBUG("Work queue size = %u, max batch = 2^%u, max copy = 2^%u",
+			wq_size, lg2_max_batch, lg2_max_copy_size);
+	for (i = 0; i < nb_wqs; i++) {
+		/* add engine "i" to a group */
+		IOAT_PMD_DEBUG("Assigning work queue %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpwqcfg[0] |= (1ULL << i);
+		/* now configure it, in terms of size, max batch, mode */
+		pci->wq_regs[i].wqcfg[WQ_SIZE_IDX] = wq_size;
+		pci->wq_regs[i].wqcfg[WQ_MODE_IDX] = (1 << WQ_PRIORITY_SHIFT) |
+				WQ_MODE_DEDICATED;
+		pci->wq_regs[i].wqcfg[WQ_SIZES_IDX] = lg2_max_copy_size |
+				(lg2_max_batch << WQ_BATCH_SZ_SHIFT);
+	}
+
+	/* dump the group configuration to output */
+	for (i = 0; i < nb_groups; i++) {
+		IOAT_PMD_DEBUG("## Group %d", i);
+		IOAT_PMD_DEBUG("    GRPWQCFG: %"PRIx64, pci->grp_regs[i].grpwqcfg[0]);
+		IOAT_PMD_DEBUG("    GRPENGCFG: %"PRIx64, pci->grp_regs[i].grpengcfg);
+		IOAT_PMD_DEBUG("    GRPFLAGS: %"PRIx32, pci->grp_regs[i].grpflags);
+	}
+
+	idxd->u.pci = pci;
+	idxd->max_batches = wq_size;
+
+	/* enable the device itself */
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error enabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device enabled OK");
+
+	return nb_wqs;
+
+err:
+	free(pci);
+	return -1;
+}
+
 static int
 idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 {
-	int ret = 0;
+	struct idxd_rawdev idxd = {0};
+	uint8_t nb_wqs;
+	int qid, ret = 0;
 	char name[PCI_PRI_STR_SIZE];
 
 	rte_pci_device_name(&dev->addr, name, sizeof(name));
 	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
 	dev->device.driver = &drv->driver;
 
-	return ret;
+	ret = init_pci_device(dev, &idxd);
+	if (ret < 0) {
+		IOAT_PMD_ERR("Error initializing PCI hardware");
+		return ret;
+	}
+	nb_wqs = (uint8_t)ret;
+
+	/* set up one device for each queue */
+	for (qid = 0; qid < nb_wqs; qid++) {
+		char qname[32];
+
+		/* add the queue number to each device name */
+		snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
+		idxd.qid = qid;
+		idxd.public.portal = RTE_PTR_ADD(idxd.u.pci->portals,
+				qid * IDXD_PORTAL_SIZE);
+		if (idxd_is_wq_enabled(&idxd))
+			IOAT_PMD_ERR("Error, WQ %u seems enabled", qid);
+		ret = idxd_rawdev_create(qname, &dev->device,
+				&idxd, &idxd_pci_ops);
+		if (ret != 0) {
+			IOAT_PMD_ERR("Failed to create rawdev %s", name);
+			if (qid == 0) /* if no devices using this, free pci */
+				free(idxd.u.pci);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+idxd_rawdev_destroy(const char *name)
+{
+	int ret;
+	uint8_t err_code;
+	struct rte_rawdev *rdev;
+	struct idxd_rawdev *idxd;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid device name");
+		return -EINVAL;
+	}
+
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* disable the device */
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error disabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device disabled OK");
+
+	/* free device memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+		rte_memzone_free(idxd->mz);
+	}
+
+	/* rte_rawdev_close is called by pmd_release */
+	ret = rte_rawdev_pmd_release(rdev);
+	if (ret)
+		IOAT_PMD_DEBUG("Device cleanup failed");
+
+	return 0;
 }
 
 static int
@@ -40,6 +270,8 @@ idxd_rawdev_remove_pci(struct rte_pci_device *dev)
 	IOAT_PMD_INFO("Closing %s on NUMA node %d",
 			name, dev->device.numa_node);
 
+	ret = idxd_rawdev_destroy(name);
+
 	return ret;
 }
 
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
new file mode 100644
index 0000000000..37a14e514d
--- /dev/null
+++ b/drivers/raw/ioat/ioat_common.c
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_rawdev_pmd.h>
+#include <rte_memzone.h>
+
+#include "ioat_private.h"
+
+int
+idxd_rawdev_create(const char *name, struct rte_device *dev,
+		   const struct idxd_rawdev *base_idxd,
+		   const struct rte_rawdev_ops *ops)
+{
+	struct idxd_rawdev *idxd;
+	struct rte_rawdev *rawdev = NULL;
+	const struct rte_memzone *mz = NULL;
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	int ret = 0;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid name of the device!");
+		ret = -EINVAL;
+		goto cleanup;
+	}
+
+	/* Allocate device structure */
+	rawdev = rte_rawdev_pmd_allocate(name, sizeof(struct idxd_rawdev),
+					 dev->numa_node);
+	if (rawdev == NULL) {
+		IOAT_PMD_ERR("Unable to allocate raw device");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+
+	snprintf(mz_name, sizeof(mz_name), "rawdev%u_private", rawdev->dev_id);
+	mz = rte_memzone_reserve(mz_name, sizeof(struct idxd_rawdev),
+			dev->numa_node, RTE_MEMZONE_IOVA_CONTIG);
+	if (mz == NULL) {
+		IOAT_PMD_ERR("Unable to reserve memzone for private data\n");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+	rawdev->dev_private = mz->addr;
+	rawdev->dev_ops = ops;
+	rawdev->device = dev;
+	rawdev->driver_name = IOAT_PMD_RAWDEV_NAME_STR;
+
+	idxd = rawdev->dev_private;
+	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->rawdev = rawdev;
+	idxd->mz = mz;
+
+	return 0;
+
+cleanup:
+	if (rawdev)
+		rte_rawdev_pmd_release(rawdev);
+
+	return ret;
+}
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index d87d4d055e..32c824536d 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -14,6 +14,10 @@
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
+#include <rte_spinlock.h>
+#include <rte_rawdev_pmd.h>
+#include "rte_ioat_rawdev.h"
+
 extern int ioat_pmd_logtype;
 
 #define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
@@ -24,4 +28,31 @@ extern int ioat_pmd_logtype;
 #define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
 #define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
 
+struct idxd_pci_common {
+	rte_spinlock_t lk;
+	volatile struct rte_idxd_bar0 *regs;
+	volatile struct rte_idxd_wqcfg *wq_regs;
+	volatile struct rte_idxd_grpcfg *grp_regs;
+	volatile void *portals;
+};
+
+struct idxd_rawdev {
+	struct rte_idxd_rawdev public; /* the public members, must be first */
+
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	uint8_t qid;
+	uint16_t max_batches;
+
+	union {
+		struct idxd_pci_common *pci;
+	} u;
+};
+
+extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
+		       const struct idxd_rawdev *idxd,
+		       const struct rte_rawdev_ops *ops);
+
+extern int idxd_rawdev_test(uint16_t dev_id);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 626794c224..4f14b3d1bd 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -7,6 +7,7 @@
 #include <rte_mbuf.h>
 #include "rte_rawdev.h"
 #include "rte_ioat_rawdev.h"
+#include "ioat_private.h"
 
 #define MAX_SUPPORTED_RAWDEVS 64
 #define TEST_SKIPPED 77
@@ -268,3 +269,9 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
+
+int
+idxd_rawdev_test(uint16_t dev_id __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/raw/ioat/ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
index 9645e16d41..1aa768b9ae 100644
--- a/drivers/raw/ioat/ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -268,6 +268,70 @@ union rte_ioat_hw_desc {
 	struct rte_ioat_pq_update_hw_desc pq_update;
 };
 
+/*** Definitions for Intel(R) Data Streaming Accelerator Follow ***/
+
+#define IDXD_CMD_SHIFT 20
+enum rte_idxd_cmds {
+	idxd_enable_dev = 1,
+	idxd_disable_dev,
+	idxd_drain_all,
+	idxd_abort_all,
+	idxd_reset_device,
+	idxd_enable_wq,
+	idxd_disable_wq,
+	idxd_drain_wq,
+	idxd_abort_wq,
+	idxd_reset_wq,
+};
+
+/* General bar0 registers */
+struct rte_idxd_bar0 {
+	uint32_t __rte_cache_aligned version;    /* offset 0x00 */
+	uint64_t __rte_aligned(0x10) gencap;     /* offset 0x10 */
+	uint64_t __rte_aligned(0x10) wqcap;      /* offset 0x20 */
+	uint64_t __rte_aligned(0x10) grpcap;     /* offset 0x30 */
+	uint64_t __rte_aligned(0x08) engcap;     /* offset 0x38 */
+	uint64_t __rte_aligned(0x10) opcap;      /* offset 0x40 */
+	uint64_t __rte_aligned(0x20) offsets[2]; /* offset 0x60 */
+	uint32_t __rte_aligned(0x20) gencfg;     /* offset 0x80 */
+	uint32_t __rte_aligned(0x08) genctrl;    /* offset 0x88 */
+	uint32_t __rte_aligned(0x10) gensts;     /* offset 0x90 */
+	uint32_t __rte_aligned(0x08) intcause;   /* offset 0x98 */
+	uint32_t __rte_aligned(0x10) cmd;        /* offset 0xA0 */
+	uint32_t __rte_aligned(0x08) cmdstatus;  /* offset 0xA8 */
+	uint64_t __rte_aligned(0x20) swerror[4]; /* offset 0xC0 */
+};
+
+struct rte_idxd_wqcfg {
+	uint32_t wqcfg[8] __rte_aligned(32); /* 32-byte register */
+};
+
+#define WQ_SIZE_IDX      0 /* size is in first 32-bit value */
+#define WQ_THRESHOLD_IDX 1 /* WQ threshold second 32-bits */
+#define WQ_MODE_IDX      2 /* WQ mode and other flags */
+#define WQ_SIZES_IDX     3 /* WQ transfer and batch sizes */
+#define WQ_OCC_INT_IDX   4 /* WQ occupancy interrupt handle */
+#define WQ_OCC_LIMIT_IDX 5 /* WQ occupancy limit */
+#define WQ_STATE_IDX     6 /* WQ state and occupancy state */
+
+#define WQ_MODE_SHARED    0
+#define WQ_MODE_DEDICATED 1
+#define WQ_PRIORITY_SHIFT 4
+#define WQ_BATCH_SZ_SHIFT 5
+#define WQ_STATE_SHIFT 30
+#define WQ_STATE_MASK 0x3
+
+struct rte_idxd_grpcfg {
+	uint64_t grpwqcfg[4]  __rte_cache_aligned; /* 64-byte register set */
+	uint64_t grpengcfg;  /* offset 32 */
+	uint32_t grpflags;   /* offset 40 */
+};
+
+#define GENSTS_DEV_STATE_MASK 0x03
+#define CMDSTATUS_ACTIVE_SHIFT 31
+#define CMDSTATUS_ACTIVE_MASK (1 << 31)
+#define CMDSTATUS_ERR_MASK 0xFF
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index b343b7367b..5eff76a1a3 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -6,6 +6,7 @@ reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
 	'idxd_vdev.c',
+	'ioat_common.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
 deps += ['bus_pci',
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 0cee6b1b09..db377fbfa7 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -39,9 +39,20 @@ struct rte_ioat_generic_hw_desc {
 
 /**
  * @internal
- * Structure representing a device instance
+ * Identify the data path to use.
+ * Must be first field of rte_ioat_rawdev and rte_idxd_rawdev structs
+ */
+enum rte_ioat_dev_type {
+	RTE_IOAT_DEV,
+	RTE_IDXD_DEV,
+};
+
+/**
+ * @internal
+ * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	enum rte_ioat_dev_type type;
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
@@ -77,6 +88,28 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED		0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/**
+ * @internal
+ * Structure representing an IDXD device instance
+ */
+struct rte_idxd_rawdev {
+	enum rte_ioat_dev_type type;
+	void *portal; /* address to write the batch descriptor */
+
+	/* counters to track the batches and the individual op handles */
+	uint16_t batch_ring_sz;  /* size of batch ring */
+	uint16_t hdl_ring_sz;    /* size of the user hdl ring */
+
+	uint16_t next_batch;     /* where we write descriptor ops */
+	uint16_t next_completed; /* batch where we read completions */
+	uint16_t next_ret_hdl;   /* the next user hdl to return */
+	uint16_t last_completed_hdl; /* the last user hdl that has completed */
+	uint16_t next_free_hdl;  /* where the handle for next op will go */
+
+	struct rte_idxd_user_hdl *hdl_ring;
+	struct rte_idxd_desc_batch *batch_ring;
+};
+
 /**
  * Enqueue a copy operation onto the ioat device
  */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 09/18] raw/ioat: create rawdev instances for idxd vdevs
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (7 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 08/18] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 10/18] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
                     ` (9 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

For each vdev (DSA work queue) instance, create a rawdev instance.
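
A brief usage sketch (the vdev and rawdev names are assumptions for
illustration): once a work queue has been probed as a vdev, the resulting
rawdev can be looked up by name and exercised via the generic rawdev API:

	/* after probing with --vdev=rawdev_idxd,wq=0.0 */
	uint16_t dev_id = rte_rawdev_get_dev_id("rawdev_idxd");

	rte_rawdev_selftest(dev_id);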

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_vdev.c    | 107 +++++++++++++++++++++++++++++++-
 drivers/raw/ioat/ioat_private.h |   4 ++
 2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 0509fc0842..93c023a6e8 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -2,6 +2,12 @@
  * Copyright(c) 2020 Intel Corporation
  */
 
+#include <fcntl.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sys/mman.h>
+
+#include <rte_memzone.h>
 #include <rte_bus_vdev.h>
 #include <rte_kvargs.h>
 #include <rte_string_fns.h>
@@ -24,6 +30,35 @@ struct idxd_vdev_args {
 	uint8_t wq_id;
 };
 
+static const struct rte_rawdev_ops idxd_vdev_ops = {
+		.dev_selftest = idxd_rawdev_test,
+};
+
+static void *
+idxd_vdev_mmap_wq(struct idxd_vdev_args *args)
+{
+	void *addr;
+	char path[PATH_MAX];
+	int fd;
+
+	snprintf(path, sizeof(path), "/dev/dsa/wq%u.%u",
+			args->device_id, args->wq_id);
+	fd = open(path, O_RDWR);
+	if (fd < 0) {
+		IOAT_PMD_ERR("Failed to open device path");
+		return NULL;
+	}
+
+	addr = mmap(NULL, 0x1000, PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, 0);
+	close(fd);
+	if (addr == MAP_FAILED) {
+		IOAT_PMD_ERR("Failed to mmap device");
+		return NULL;
+	}
+
+	return addr;
+}
+
 static int
 idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
 			  void *extra_args)
@@ -70,10 +105,34 @@ idxd_vdev_parse_params(struct rte_kvargs *kvlist, struct idxd_vdev_args *args)
 	return -EINVAL;
 }
 
+static int
+idxd_vdev_get_max_batches(struct idxd_vdev_args *args)
+{
+	char sysfs_path[PATH_MAX];
+	FILE *f;
+	int ret = -1;
+
+	snprintf(sysfs_path, sizeof(sysfs_path),
+			"/sys/bus/dsa/devices/wq%u.%u/size",
+			args->device_id, args->wq_id);
+	f = fopen(sysfs_path, "r");
+	if (f == NULL)
+		return -1;
+
+	fscanf(f, "%d", &ret);
+	/* if fscanf does not read ret correctly, it will be -1, so no error
+	 * handling is necessary
+	 */
+
+	fclose(f);
+	return ret;
+}
+
 static int
 idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 {
 	struct rte_kvargs *kvlist;
+	struct idxd_rawdev idxd = {0};
 	struct idxd_vdev_args vdev_args;
 	const char *name;
 	int ret = 0;
@@ -96,13 +155,32 @@ idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 		return -EINVAL;
 	}
 
+	idxd.qid = vdev_args.wq_id;
+	idxd.u.vdev.dsa_id = vdev_args.device_id;
+	idxd.max_batches = idxd_vdev_get_max_batches(&vdev_args);
+
+	idxd.public.portal = idxd_vdev_mmap_wq(&vdev_args);
+	if (idxd.public.portal == NULL) {
+		IOAT_PMD_ERR("WQ mmap failed");
+		return -ENOENT;
+	}
+
+	ret = idxd_rawdev_create(name, &vdev->device, &idxd, &idxd_vdev_ops);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to create rawdev %s", name);
+		return ret;
+	}
+
 	return 0;
 }
 
 static int
 idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 {
+	struct idxd_rawdev *idxd;
 	const char *name;
+	struct rte_rawdev *rdev;
+	int ret = 0;
 
 	name = rte_vdev_device_name(vdev);
 	if (name == NULL)
@@ -110,7 +188,34 @@ idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 
 	IOAT_PMD_INFO("Remove DSA vdev %p", name);
 
-	return 0;
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* free context and memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+
+		if (munmap(idxd->public.portal, 0x1000) < 0) {
+			IOAT_PMD_ERR("Error unmapping portal");
+			ret = -errno;
+		}
+
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+
+		rte_memzone_free(idxd->mz);
+	}
+
+	if (rte_rawdev_pmd_release(rdev))
+		IOAT_PMD_ERR("Device cleanup failed");
+
+	return ret;
 }
 
 struct rte_vdev_driver idxd_rawdev_drv_vdev = {
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 32c824536d..bd2bef3834 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -45,6 +45,10 @@ struct idxd_rawdev {
 	uint16_t max_batches;
 
 	union {
+		struct {
+			unsigned dsa_id;
+		} vdev;
+
 		struct idxd_pci_common *pci;
 	} u;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 10/18] raw/ioat: add datapath data structures for idxd devices
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (8 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 09/18] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-25 15:27     ` Laatz, Kevin
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 11/18] raw/ioat: add configure function " Bruce Richardson
                     ` (8 subsequent siblings)
  18 siblings, 1 reply; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Add in the relevant data structures for the data path for DSA devices. Also
include a device dump function to output the status of each device.
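
A minimal sketch of invoking the new dump function through the generic
rawdev API (the device id is an assumption):

	/* prints portal, ring sizes and batch state to stdout */
	rte_rawdev_dump(0, stdout);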

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  3 +-
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 34 +++++++++++
 drivers/raw/ioat/ioat_private.h        |  2 +
 drivers/raw/ioat/ioat_rawdev_test.c    |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 80 ++++++++++++++++++++++++++
 6 files changed, 121 insertions(+), 2 deletions(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 72f4ecebb7..ce238ae04c 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -52,7 +52,8 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 }
 
 static const struct rte_rawdev_ops idxd_pci_ops = {
-		.dev_selftest = idxd_rawdev_test
+		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 93c023a6e8..0f9aa48e84 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -32,6 +32,7 @@ struct idxd_vdev_args {
 
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 37a14e514d..fb4f7055de 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -7,6 +7,36 @@
 
 #include "ioat_private.h"
 
+int
+idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	int i;
+
+	fprintf(f, "Raw Device #%d\n", dev->dev_id);
+	fprintf(f, "Driver: %s\n\n", dev->driver_name);
+
+	fprintf(f, "Portal: %p\n", rte_idxd->portal);
+	fprintf(f, "Batch Ring size: %u\n", rte_idxd->batch_ring_sz);
+	fprintf(f, "Comp Handle Ring size: %u\n\n", rte_idxd->hdl_ring_sz);
+
+	fprintf(f, "Next batch: %u\n", rte_idxd->next_batch);
+	fprintf(f, "Next batch to be completed: %u\n", rte_idxd->next_completed);
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		fprintf(f, "Batch %u @%p: submitted=%u, op_count=%u, hdl_end=%u\n",
+				i, b, b->submitted, b->op_count, b->hdl_end);
+	}
+
+	fprintf(f, "\n");
+	fprintf(f, "Next free hdl: %u\n", rte_idxd->next_free_hdl);
+	fprintf(f, "Last completed hdl: %u\n", rte_idxd->last_completed_hdl);
+	fprintf(f, "Next returned hdl: %u\n", rte_idxd->next_ret_hdl);
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
@@ -18,6 +48,10 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	int ret = 0;
 
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_hw_desc) != 64);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_idxd_hw_desc, size) != 32);
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_completion) != 32);
+
 	if (!name) {
 		IOAT_PMD_ERR("Invalid name of the device!");
 		ret = -EINVAL;
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index bd2bef3834..974eeb0106 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -59,4 +59,6 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 extern int idxd_rawdev_test(uint16_t dev_id);
 
+extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 4f14b3d1bd..082b3091c4 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -271,7 +271,8 @@ ioat_rawdev_test(uint16_t dev_id)
 }
 
 int
-idxd_rawdev_test(uint16_t dev_id __rte_unused)
+idxd_rawdev_test(uint16_t dev_id)
 {
+	rte_rawdev_dump(dev_id, stdout);
 	return 0;
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index db377fbfa7..d258ad9fd2 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -88,6 +88,86 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED		0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/*
+ * Defines used in the data path for interacting with hardware.
+ */
+#define IDXD_CMD_OP_SHIFT 24
+enum rte_idxd_ops {
+	idxd_op_nop = 0,
+	idxd_op_batch,
+	idxd_op_drain,
+	idxd_op_memmove,
+	idxd_op_fill
+};
+
+#define IDXD_FLAG_FENCE                 (1 << 0)
+#define IDXD_FLAG_COMPLETION_ADDR_VALID (1 << 2)
+#define IDXD_FLAG_REQUEST_COMPLETION    (1 << 3)
+#define IDXD_FLAG_CACHE_CONTROL         (1 << 8)
+
+/**
+ * Hardware descriptor used by DSA hardware, for both bursts and
+ * for individual operations.
+ */
+struct rte_idxd_hw_desc {
+	uint32_t pasid;
+	uint32_t op_flags;
+	rte_iova_t completion;
+
+	RTE_STD_C11
+	union {
+		rte_iova_t src;      /* source address for copy ops etc. */
+		rte_iova_t desc_addr; /* descriptor pointer for batch */
+	};
+	rte_iova_t dst;
+
+	uint32_t size;    /* length of data for op, or batch size */
+
+	/* 28 bytes of padding here */
+} __rte_aligned(64);
+
+/**
+ * Completion record structure written back by DSA
+ */
+struct rte_idxd_completion {
+	uint8_t status;
+	uint8_t result;
+	/* 16-bits pad here */
+	uint32_t completed_size; /* data length, or descriptors for batch */
+
+	rte_iova_t fault_address;
+	uint32_t invalid_flags;
+} __rte_aligned(32);
+
+#define BATCH_SIZE 64
+
+/**
+ * Structure used inside the driver for building up and submitting
+ * a batch of operations to the DSA hardware.
+ */
+struct rte_idxd_desc_batch {
+	struct rte_idxd_completion comp; /* the completion record for batch */
+
+	uint16_t submitted;
+	uint16_t op_count;
+	uint16_t hdl_end;
+
+	struct rte_idxd_hw_desc batch_desc;
+
+	/* batches must always have 2 descriptors, so put a null at the start */
+	struct rte_idxd_hw_desc null_desc;
+	struct rte_idxd_hw_desc ops[BATCH_SIZE];
+};
+
+/**
+ * structure used to save the "handles" provided by the user to be
+ * returned to the user on job completion.
+ */
+struct rte_idxd_user_hdl {
+	uint64_t src;
+	uint64_t dst;
+};
+
 /**
  * @internal
  * Structure representing an IDXD device instance
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 11/18] raw/ioat: add configure function for idxd devices
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (9 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 10/18] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 12/18] raw/ioat: add start and stop functions " Bruce Richardson
                     ` (7 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Add configure function for idxd devices, taking the same parameters as the
existing configure function for ioat. The ring_size parameter is used to
compute the maximum number of bursts to be supported by the driver, given
that the hardware works on individual bursts of descriptors at a time.
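
A configuration sketch (device id and ring size are illustrative, and this
assumes the 20.11 rawdev API where the size of the driver-private config
structure is passed alongside the info struct):

	uint16_t dev_id = 0;
	struct rte_ioat_rawdev_config conf = { .ring_size = 512 };
	struct rte_rawdev_info info = { .dev_private = &conf };

	if (rte_rawdev_configure(dev_id, &info, sizeof(conf)) != 0)
		rte_panic("cannot configure rawdev %u\n", dev_id);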

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 64 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h        |  3 ++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  1 +
 5 files changed, 70 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index ce238ae04c..98e8668e34 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -54,6 +54,7 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 0f9aa48e84..73cb5d938f 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -33,6 +33,7 @@ struct idxd_vdev_args {
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index fb4f7055de..85b13b0bae 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -37,6 +37,70 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	struct rte_ioat_rawdev_config *cfg = config;
+	uint16_t max_desc = cfg->ring_size;
+	uint16_t max_batches = max_desc / BATCH_SIZE;
+	uint16_t i;
+
+	if (config_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (dev->started) {
+		IOAT_PMD_ERR("%s: Error, device is started.", __func__);
+		return -EAGAIN;
+	}
+
+	rte_idxd->hdls_disable = cfg->hdls_disable;
+
+	/* limit the batches to what can be stored in hardware */
+	if (max_batches > idxd->max_batches) {
+		IOAT_PMD_DEBUG("Ring size of %u is too large for this device, need to limit to %u batches of %u",
+				max_desc, idxd->max_batches, BATCH_SIZE);
+		max_batches = idxd->max_batches;
+		max_desc = max_batches * BATCH_SIZE;
+	}
+	if (!rte_is_power_of_2(max_desc))
+		max_desc = rte_align32pow2(max_desc);
+	IOAT_PMD_DEBUG("Rawdev %u using %u descriptors in %u batches",
+			dev->dev_id, max_desc, max_batches);
+
+	/* in case we are reconfiguring a device, free any existing memory */
+	rte_free(rte_idxd->batch_ring);
+	rte_free(rte_idxd->hdl_ring);
+
+	rte_idxd->batch_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->batch_ring) * max_batches, 0);
+	if (rte_idxd->batch_ring == NULL)
+		return -ENOMEM;
+
+	rte_idxd->hdl_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->hdl_ring) * max_desc, 0);
+	if (rte_idxd->hdl_ring == NULL) {
+		rte_free(rte_idxd->batch_ring);
+		rte_idxd->batch_ring = NULL;
+		return -ENOMEM;
+	}
+	rte_idxd->batch_ring_sz = max_batches;
+	rte_idxd->hdl_ring_sz = max_desc;
+
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		b->batch_desc.completion = rte_mem_virt2iova(&b->comp);
+		b->batch_desc.desc_addr = rte_mem_virt2iova(&b->null_desc);
+		b->batch_desc.op_flags = (idxd_op_batch << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_COMPLETION_ADDR_VALID |
+				IDXD_FLAG_REQUEST_COMPLETION;
+	}
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 974eeb0106..928c9b497c 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -57,6 +57,9 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
 
+extern int idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index d258ad9fd2..1939437d50 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -185,6 +185,7 @@ struct rte_idxd_rawdev {
 	uint16_t next_ret_hdl;   /* the next user hdl to return */
 	uint16_t last_completed_hdl; /* the last user hdl that has completed */
 	uint16_t next_free_hdl;  /* where the handle for next op will go */
+	uint16_t hdls_disable;   /* disable tracking completion handles */
 
 	struct rte_idxd_user_hdl *hdl_ring;
 	struct rte_idxd_desc_batch *batch_ring;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 12/18] raw/ioat: add start and stop functions for idxd devices
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (10 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 11/18] raw/ioat: add configure function " Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 13/18] raw/ioat: add data path " Bruce Richardson
                     ` (6 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Add the start and stop functions for DSA hardware devices using the
vfio/uio kernel drivers. For vdevs using the idxd kernel driver, the device
must be started using sysfs before the device node appears for vdev use,
making start/stop functions in the driver unnecessary.
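
For the PCI (vfio/uio) case the standard rawdev calls map onto these new
functions; a minimal sketch, with the device id being an assumption:

	uint16_t dev_id = 0;

	if (rte_rawdev_start(dev_id) != 0)
		rte_panic("cannot enable work queue\n");

	/* ... datapath usage ... */

	rte_rawdev_stop(dev_id);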

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c | 50 +++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 98e8668e34..13a0a03211 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -51,10 +51,60 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 	return ((state >> WQ_STATE_SHIFT) & WQ_STATE_MASK) == 0x1;
 }
 
+static void
+idxd_pci_dev_stop(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (!idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Work queue %d already disabled", idxd->qid);
+		return;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_wq);
+	if (err_code || idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed disabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return;
+	}
+	IOAT_PMD_DEBUG("Work queue %d disabled OK", idxd->qid);
+}
+
+static int
+idxd_pci_dev_start(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_WARN("WQ %d already enabled", idxd->qid);
+		return 0;
+	}
+
+	if (idxd->public.batch_ring == NULL) {
+		IOAT_PMD_ERR("WQ %d has not been fully configured", idxd->qid);
+		return -EINVAL;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_wq);
+	if (err_code || !idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed enabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return err_code == 0 ? -1 : err_code;
+	}
+
+	IOAT_PMD_DEBUG("Work queue %d enabled OK", idxd->qid);
+
+	return 0;
+}
+
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_start = idxd_pci_dev_start,
+		.dev_stop = idxd_pci_dev_stop,
 };
 
 /* each portal uses 4 x 4k pages */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 13/18] raw/ioat: add data path for idxd devices
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (11 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 12/18] raw/ioat: add start and stop functions " Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-25 15:27     ` Laatz, Kevin
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 14/18] raw/ioat: add info function " Bruce Richardson
                     ` (5 subsequent siblings)
  18 siblings, 1 reply; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Add support for doing copies using DSA hardware. This is implemented by
just switching on the device type field at the start of the inline
functions. Since no hardware will have both device types present, this
branch will always be predictable after the first call, meaning it has
little to no performance penalty.
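
A usage sketch of the copy cycle through these common inline functions
(device id, IOVAs, buffer pointers and array sizes are placeholders):

	uint16_t dev_id = 0;
	uintptr_t src_hdls[8], dst_hdls[8];
	int n;

	/* enqueue a copy; returns 1 on success, 0 if there is no room */
	if (rte_ioat_enqueue_copy(dev_id, src_iova, dst_iova, length,
			(uintptr_t)src_buf, (uintptr_t)dst_buf, 0) != 1)
		return -ENOSPC;

	/* kick off the enqueued operations - dispatches to ioat or idxd */
	rte_ioat_do_copies(dev_id);

	/* later, poll for completions and retrieve the user handles */
	n = rte_ioat_completed_copies(dev_id, 8, src_hdls, dst_hdls);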

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/ioat_common.c         |   1 +
 drivers/raw/ioat/ioat_rawdev.c         |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 164 +++++++++++++++++++++++--
 3 files changed, 157 insertions(+), 9 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 85b13b0bae..23a9d71dd3 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -146,6 +146,7 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 	idxd = rawdev->dev_private;
 	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->public.type = RTE_IDXD_DEV;
 	idxd->rawdev = rawdev;
 	idxd->mz = mz;
 
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 20c8b671a6..15133737b9 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -253,6 +253,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	rawdev->driver_name = dev->device.driver->name;
 
 	ioat = rawdev->dev_private;
+	ioat->type = RTE_IOAT_DEV;
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 1939437d50..19aaaa50c8 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -194,8 +194,8 @@ struct rte_idxd_rawdev {
 /**
  * Enqueue a copy operation onto the ioat device
  */
-static inline int
-rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+static __rte_always_inline int
+__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
 		int fence)
 {
@@ -233,8 +233,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 /**
  * Trigger hardware to begin performing enqueued copy operations
  */
-static inline void
-rte_ioat_do_copies(int dev_id)
+static __rte_always_inline void
+__ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
@@ -248,8 +248,8 @@ rte_ioat_do_copies(int dev_id)
  * @internal
  * Returns the index of the last completed operation.
  */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+static __rte_always_inline int
+__ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 {
 	uint64_t status = ioat->status;
 
@@ -263,8 +263,8 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /**
  * Returns details of copy operations that have been completed
  */
-static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+static __rte_always_inline int
+__ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
@@ -274,7 +274,7 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	int error;
 	int i = 0;
 
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	end_read = (__ioat_get_last_completed(ioat, &error) + 1) & mask;
 	count = (end_read - (read & mask)) & mask;
 
 	if (error) {
@@ -311,4 +311,150 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static __rte_always_inline int
+__idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
+		int fence __rte_unused)
+{
+	struct rte_idxd_rawdev *idxd = rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+	uint32_t op_flags = (idxd_op_memmove << IDXD_CMD_OP_SHIFT) |
+			IDXD_FLAG_CACHE_CONTROL;
+
+	/* check for room in the handle ring */
+	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl) {
+		rte_errno = ENOSPC;
+		return 0;
+	}
+	if (b->op_count >= BATCH_SIZE) {
+		rte_errno = ENOSPC;
+		return 0;
+	}
+	/* check that we can actually use the current batch */
+	if (b->submitted) {
+		rte_errno = ENOSPC;
+		return 0;
+	}
+
+	/* write the descriptor */
+	b->ops[b->op_count++] = (struct rte_idxd_hw_desc){
+		.op_flags =  op_flags,
+		.src = src,
+		.dst = dst,
+		.size = length
+	};
+
+	/* store the completion details */
+	if (!idxd->hdls_disable)
+		idxd->hdl_ring[idxd->next_free_hdl] = (struct rte_idxd_user_hdl) {
+			.src = src_hdl,
+			.dst = dst_hdl
+		};
+	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
+		idxd->next_free_hdl = 0;
+
+	return 1;
+}
+
+static __rte_always_inline void
+__idxd_movdir64b(volatile void *dst, const void *src)
+{
+	asm volatile (".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
+			:
+			: "a" (dst), "d" (src));
+}
+
+static __rte_always_inline void
+__idxd_perform_ops(int dev_id)
+{
+	struct rte_idxd_rawdev *idxd = rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+
+	if (b->submitted || b->op_count == 0)
+		return;
+	b->hdl_end = idxd->next_free_hdl;
+	b->comp.status = 0;
+	b->submitted = 1;
+	b->batch_desc.size = b->op_count + 1;
+	__idxd_movdir64b(idxd->portal, &b->batch_desc);
+
+	if (++idxd->next_batch == idxd->batch_ring_sz)
+		idxd->next_batch = 0;
+}
+
+static __rte_always_inline int
+__idxd_completed_ops(int dev_id, uint8_t max_ops,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_idxd_rawdev *idxd = rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_completed];
+	uint16_t h_idx = idxd->next_ret_hdl;
+	int n = 0;
+
+	while (b->submitted && b->comp.status != 0) {
+		idxd->last_completed_hdl = b->hdl_end;
+		b->submitted = 0;
+		b->op_count = 0;
+		if (++idxd->next_completed == idxd->batch_ring_sz)
+			idxd->next_completed = 0;
+		b = &idxd->batch_ring[idxd->next_completed];
+	}
+
+	if (!idxd->hdls_disable)
+		for (n = 0; n < max_ops && h_idx != idxd->last_completed_hdl; n++) {
+			src_hdls[n] = idxd->hdl_ring[h_idx].src;
+			dst_hdls[n] = idxd->hdl_ring[h_idx].dst;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+	else
+		while (h_idx != idxd->last_completed_hdl) {
+			n++;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+
+	idxd->next_ret_hdl = h_idx;
+
+	return n;
+}
+
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
+		int fence)
+{
+	enum rte_ioat_dev_type *type = rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl, fence);
+	else
+		return __ioat_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl, fence);
+}
+
+static inline void
+rte_ioat_do_copies(int dev_id)
+{
+	enum rte_ioat_dev_type *type = rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_perform_ops(dev_id);
+	else
+		return __ioat_perform_ops(dev_id);
+}
+
+static inline int
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	enum rte_ioat_dev_type *type = rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_completed_ops(dev_id, max_copies,
+				src_hdls, dst_hdls);
+	else
+		return __ioat_completed_ops(dev_id,  max_copies,
+				src_hdls, dst_hdls);
+}
+
+
 #endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 14/18] raw/ioat: add info function for idxd devices
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (12 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 13/18] raw/ioat: add data path " Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 15/18] raw/ioat: create separate statistics structure Bruce Richardson
                     ` (4 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Add the info get function for DSA devices, returning just the ring size
info about the device, the same as is returned for existing IOAT/CBDMA
devices.
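
A short usage sketch (device id is an assumption; as with configure, the
20.11 rawdev API passing the private struct size is assumed):

	uint16_t dev_id = 0;
	struct rte_ioat_rawdev_config conf;
	struct rte_rawdev_info info = { .dev_private = &conf };

	if (rte_rawdev_info_get(dev_id, &info, sizeof(conf)) == 0)
		printf("ring size: %u\n", conf.ring_size);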

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c     |  1 +
 drivers/raw/ioat/idxd_vdev.c    |  1 +
 drivers/raw/ioat/ioat_common.c  | 18 ++++++++++++++++++
 drivers/raw/ioat/ioat_private.h |  3 +++
 4 files changed, 23 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 13a0a03211..1ae20bc04f 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -105,6 +105,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 73cb5d938f..3d6aa31f48 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -34,6 +34,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 23a9d71dd3..4c225762bd 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -37,6 +37,24 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size)
+{
+	struct rte_ioat_rawdev_config *cfg = dev_info;
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+
+	if (info_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (cfg != NULL) {
+		cfg->ring_size = rte_idxd->hdl_ring_sz;
+		cfg->hdls_disable = rte_idxd->hdls_disable;
+	}
+	return 0;
+}
+
 int
 idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size)
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 928c9b497c..0b17a8646b 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -60,6 +60,9 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 extern int idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size);
 
+extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 15/18] raw/ioat: create separate statistics structure
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (13 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 14/18] raw/ioat: add info function " Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 16/18] raw/ioat: move xstats functions to common file Bruce Richardson
                     ` (3 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Rather than having the xstats as fields inside the main driver structure,
create a separate structure type for them.

As part of the change, the stats functions which previously referred to the
individual stat fields can be simplified to use the id to index directly
into the new stats structure, making the code shorter and simpler.
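
The resulting scheme treats the structure as an array of counters indexed by
xstat id; a tiny sketch of the idea, with a made-up value for illustration:

	struct rte_ioat_xstats xs = { .enqueued = 42 };
	const uint64_t *stats = (const void *)&xs;

	/* id 1 corresponds to "successful_enqueues" in xstat_names[] */
	uint64_t enq = stats[1];	/* enq == 42 */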

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c         | 40 +++++++-------------------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 30 ++++++++++++-------
 2 files changed, 29 insertions(+), 41 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 15133737b9..064eb839cf 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -132,16 +132,14 @@ ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
 		uint64_t values[], unsigned int n)
 {
 	const struct rte_ioat_rawdev *ioat = dev->dev_private;
+	const uint64_t *stats = (const void *)&ioat->xstats;
 	unsigned int i;
 
 	for (i = 0; i < n; i++) {
-		switch (ids[i]) {
-		case 0: values[i] = ioat->enqueue_failed; break;
-		case 1: values[i] = ioat->enqueued; break;
-		case 2: values[i] = ioat->started; break;
-		case 3: values[i] = ioat->completed; break;
-		default: values[i] = 0; break;
-		}
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			values[i] = stats[ids[i]];
+		else
+			values[i] = 0;
 	}
 	return n;
 }
@@ -167,35 +165,17 @@ static int
 ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
 {
 	struct rte_ioat_rawdev *ioat = dev->dev_private;
+	uint64_t *stats = (void *)&ioat->xstats;
 	unsigned int i;
 
 	if (!ids) {
-		ioat->enqueue_failed = 0;
-		ioat->enqueued = 0;
-		ioat->started = 0;
-		ioat->completed = 0;
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
 		return 0;
 	}
 
-	for (i = 0; i < nb_ids; i++) {
-		switch (ids[i]) {
-		case 0:
-			ioat->enqueue_failed = 0;
-			break;
-		case 1:
-			ioat->enqueued = 0;
-			break;
-		case 2:
-			ioat->started = 0;
-			break;
-		case 3:
-			ioat->completed = 0;
-			break;
-		default:
-			IOAT_PMD_WARN("Invalid xstat id - cannot reset value");
-			break;
-		}
-	}
+	for (i = 0; i < nb_ids; i++)
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			stats[ids[i]] = 0;
 
 	return 0;
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 19aaaa50c8..66e3f1a836 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -47,17 +47,31 @@ enum rte_ioat_dev_type {
 	RTE_IDXD_DEV,
 };
 
+/**
+ * @internal
+ * some statistics for tracking, if added/changed update xstats fns
+ */
+struct rte_ioat_xstats {
+	uint64_t enqueue_failed;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+};
+
 /**
  * @internal
  * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	/* common fields at the top - match those in rte_idxd_rawdev */
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile uint16_t *doorbell;
+	volatile uint16_t *doorbell __rte_cache_aligned;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -70,12 +84,6 @@ struct rte_ioat_rawdev {
 	unsigned short next_read;
 	unsigned short next_write;
 
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
 
@@ -207,7 +215,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	struct rte_ioat_generic_hw_desc *desc;
 
 	if (space == 0) {
-		ioat->enqueue_failed++;
+		ioat->xstats.enqueue_failed++;
 		return 0;
 	}
 
@@ -226,7 +234,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 					(int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
-	ioat->enqueued++;
+	ioat->xstats.enqueued++;
 	return 1;
 }
 
@@ -241,7 +249,7 @@ __ioat_perform_ops(int dev_id)
 			.control.completion_update = 1;
 	rte_compiler_barrier();
 	*ioat->doorbell = ioat->next_write;
-	ioat->started = ioat->enqueued;
+	ioat->xstats.started = ioat->xstats.enqueued;
 }
 
 /**
@@ -307,7 +315,7 @@ __ioat_completed_ops(int dev_id, uint8_t max_copies,
 
 end:
 	ioat->next_read = read;
-	ioat->completed += count;
+	ioat->xstats.completed += count;
 	return count;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 16/18] raw/ioat: move xstats functions to common file
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (14 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 15/18] raw/ioat: create separate statistics structure Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 17/18] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
                     ` (2 subsequent siblings)
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

The xstats functions can be used by all ioat devices so move them from the
ioat_rawdev.c file to ioat_common.c, and add the function prototypes to the
internal header file.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/ioat_common.c  | 59 +++++++++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 10 ++++++
 drivers/raw/ioat/ioat_rawdev.c  | 58 --------------------------------
 3 files changed, 69 insertions(+), 58 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 4c225762bd..08ec5a458e 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -4,9 +4,68 @@
 
 #include <rte_rawdev_pmd.h>
 #include <rte_memzone.h>
+#include <rte_string_fns.h>
 
 #include "ioat_private.h"
 
+static const char * const xstat_names[] = {
+		"failed_enqueues", "successful_enqueues",
+		"copies_started", "copies_completed"
+};
+
+int
+ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n)
+{
+	const struct rte_ioat_rawdev *ioat = dev->dev_private;
+	const uint64_t *stats = (const void *)&ioat->xstats;
+	unsigned int i;
+
+	for (i = 0; i < n; i++) {
+		if (ids[i] >= sizeof(ioat->xstats)/sizeof(*stats))
+			values[i] = 0;
+		else
+			values[i] = stats[ids[i]];
+	}
+	return n;
+}
+
+int
+ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size)
+{
+	unsigned int i;
+
+	RTE_SET_USED(dev);
+	if (size < RTE_DIM(xstat_names))
+		return RTE_DIM(xstat_names);
+
+	for (i = 0; i < RTE_DIM(xstat_names); i++)
+		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
+
+	return RTE_DIM(xstat_names);
+}
+
+int
+ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
+{
+	struct rte_ioat_rawdev *ioat = dev->dev_private;
+	uint64_t *stats = (void *)&ioat->xstats;
+	unsigned int i;
+
+	if (!ids) {
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
+		return 0;
+	}
+
+	for (i = 0; i < nb_ids; i++)
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			stats[ids[i]] = 0;
+
+	return 0;
+}
+
 int
 idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 {
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 0b17a8646b..f4e2982e2b 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -53,6 +53,16 @@ struct idxd_rawdev {
 	} u;
 };
 
+int ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n);
+
+int ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size);
+
+int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
+		uint32_t nb_ids);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 064eb839cf..385917db29 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -122,64 +122,6 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 	return 0;
 }
 
-static const char * const xstat_names[] = {
-		"failed_enqueues", "successful_enqueues",
-		"copies_started", "copies_completed"
-};
-
-static int
-ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
-		uint64_t values[], unsigned int n)
-{
-	const struct rte_ioat_rawdev *ioat = dev->dev_private;
-	const uint64_t *stats = (const void *)&ioat->xstats;
-	unsigned int i;
-
-	for (i = 0; i < n; i++) {
-		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
-			values[i] = stats[ids[i]];
-		else
-			values[i] = 0;
-	}
-	return n;
-}
-
-static int
-ioat_xstats_get_names(const struct rte_rawdev *dev,
-		struct rte_rawdev_xstats_name *names,
-		unsigned int size)
-{
-	unsigned int i;
-
-	RTE_SET_USED(dev);
-	if (size < RTE_DIM(xstat_names))
-		return RTE_DIM(xstat_names);
-
-	for (i = 0; i < RTE_DIM(xstat_names); i++)
-		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
-
-	return RTE_DIM(xstat_names);
-}
-
-static int
-ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
-{
-	struct rte_ioat_rawdev *ioat = dev->dev_private;
-	uint64_t *stats = (void *)&ioat->xstats;
-	unsigned int i;
-
-	if (!ids) {
-		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
-		return 0;
-	}
-
-	for (i = 0; i < nb_ids; i++)
-		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
-			stats[ids[i]] = 0;
-
-	return 0;
-}
-
 extern int ioat_rawdev_test(uint16_t dev_id);
 
 static int
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 17/18] raw/ioat: add xstats tracking for idxd devices
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (15 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 16/18] raw/ioat: move xstats functions to common file Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-24  9:56     ` Laatz, Kevin
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 18/18] raw/ioat: clean up use of common test function Bruce Richardson
  2020-08-21 16:39   ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
  18 siblings, 1 reply; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Add update of the relevant stats for the data path functions and point the
overall device struct xstats function pointers to the existing ioat
functions.

At this point, all necessary hooks for supporting the existing unit tests
are in place so call them for each device.
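
For reference, a sketch of how the counters can then be read from an
application; the helper name and the exact return-value check are assumptions
for illustration, with ids following the order of the driver's xstat_names[]
array:

	#include <stdio.h>
	#include <inttypes.h>
	#include <rte_rawdev.h>

	static void
	dump_copy_stats(uint16_t dev_id)
	{
		const unsigned int ids[] = { 0, 1, 2, 3 };
		uint64_t vals[4];

		if (rte_rawdev_xstats_get(dev_id, ids, vals, 4) == 4)
			printf("enq_fail=%" PRIu64 " enq=%" PRIu64
					" started=%" PRIu64 " done=%" PRIu64 "\n",
					vals[0], vals[1], vals[2], vals[3]);
	}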

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  3 +++
 drivers/raw/ioat/idxd_vdev.c           |  3 +++
 drivers/raw/ioat/ioat_rawdev_test.c    |  2 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 30 +++++++++++++++-----------
 4 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 1ae20bc04f..4b97b5b5fd 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -106,6 +106,9 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 3d6aa31f48..febc5919f4 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -35,6 +35,9 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 082b3091c4..db10178871 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -274,5 +274,5 @@ int
 idxd_rawdev_test(uint16_t dev_id)
 {
 	rte_rawdev_dump(dev_id, stdout);
-	return 0;
+	return ioat_rawdev_test(dev_id);
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 66e3f1a836..db8608fa6b 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -182,6 +182,8 @@ struct rte_idxd_user_hdl {
  */
 struct rte_idxd_rawdev {
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	void *portal; /* address to write the batch descriptor */
 
 	/* counters to track the batches and the individual op handles */
@@ -330,19 +332,15 @@ __idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
 			IDXD_FLAG_CACHE_CONTROL;
 
 	/* check for room in the handle ring */
-	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl) {
-		rte_errno = ENOSPC;
-		return 0;
-	}
-	if (b->op_count >= BATCH_SIZE) {
-		rte_errno = ENOSPC;
-		return 0;
-	}
+	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl)
+		goto failed;
+
+	if (b->op_count >= BATCH_SIZE)
+		goto failed;
+
 	/* check that we can actually use the current batch */
-	if (b->submitted) {
-		rte_errno = ENOSPC;
-		return 0;
-	}
+	if (b->submitted)
+		goto failed;
 
 	/* write the descriptor */
 	b->ops[b->op_count++] = (struct rte_idxd_hw_desc){
@@ -361,7 +359,13 @@ __idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
 	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
 		idxd->next_free_hdl = 0;
 
+	idxd->xstats.enqueued++;
 	return 1;
+
+failed:
+	idxd->xstats.enqueue_failed++;
+	rte_errno = ENOSPC;
+	return 0;
 }
 
 static __rte_always_inline void
@@ -388,6 +392,7 @@ __idxd_perform_ops(int dev_id)
 
 	if (++idxd->next_batch == idxd->batch_ring_sz)
 		idxd->next_batch = 0;
+	idxd->xstats.started = idxd->xstats.enqueued;
 }
 
 static __rte_always_inline int
@@ -424,6 +429,7 @@ __idxd_completed_ops(int dev_id, uint8_t max_ops,
 
 	idxd->next_ret_hdl = h_idx;
 
+	idxd->xstats.completed += n;
 	return n;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v2 18/18] raw/ioat: clean up use of common test function
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (16 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 17/18] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
@ 2020-08-21 16:29   ` Bruce Richardson
  2020-08-21 16:39   ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:29 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz, Bruce Richardson

Now that all devices can pass the same set of unit tests, eliminate the
temporary idxd_rawdev_test function and move the prototype for
ioat_rawdev_test to the proper internal header file, to be used by all
device instances.
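
A short sketch of how the shared self-test is then reached through the public
rawdev API; the helper name and the assumption of contiguous device ids are
for illustration only:

	#include <stdio.h>
	#include <rte_rawdev.h>

	static void
	selftest_all_rawdevs(void)
	{
		uint16_t id;

		for (id = 0; id < rte_rawdev_count(); id++)
			if (rte_rawdev_selftest(id) != 0)
				printf("rawdev %u selftest failed\n",
						(unsigned int)id);
	}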

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_pci.c         | 2 +-
 drivers/raw/ioat/idxd_vdev.c        | 2 +-
 drivers/raw/ioat/ioat_private.h     | 4 ++--
 drivers/raw/ioat/ioat_rawdev.c      | 2 --
 drivers/raw/ioat/ioat_rawdev_test.c | 7 -------
 5 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 4b97b5b5fd..926ca39ca8 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -100,7 +100,7 @@ idxd_pci_dev_start(struct rte_rawdev *dev)
 }
 
 static const struct rte_rawdev_ops idxd_pci_ops = {
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index febc5919f4..5529760aa2 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -31,7 +31,7 @@ struct idxd_vdev_args {
 };
 
 static const struct rte_rawdev_ops idxd_vdev_ops = {
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_info_get = idxd_dev_info_get,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index f4e2982e2b..2925483ffb 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -63,6 +63,8 @@ int ioat_xstats_get_names(const struct rte_rawdev *dev,
 int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
 		uint32_t nb_ids);
 
+extern int ioat_rawdev_test(uint16_t dev_id);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
@@ -73,8 +75,6 @@ extern int idxd_dev_configure(const struct rte_rawdev *dev,
 extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 		size_t info_size);
 
-extern int idxd_rawdev_test(uint16_t dev_id);
-
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
 
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 385917db29..06af235f96 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -122,8 +122,6 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 	return 0;
 }
 
-extern int ioat_rawdev_test(uint16_t dev_id);
-
 static int
 ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index db10178871..534b07b124 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -269,10 +269,3 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
-
-int
-idxd_rawdev_test(uint16_t dev_id)
-{
-	rte_rawdev_dump(dev_id, stdout);
-	return ioat_rawdev_test(dev_id);
-}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (17 preceding siblings ...)
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 18/18] raw/ioat: clean up use of common test function Bruce Richardson
@ 2020-08-21 16:39   ` Bruce Richardson
  18 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-21 16:39 UTC (permalink / raw)
  To: dev; +Cc: cheng1.jiang, patrick.fu, ping.yu, kevin.laatz

On Fri, Aug 21, 2020 at 05:29:26PM +0100, Bruce Richardson wrote:
> This patchset adds some small enhancements, some rework and also support
> for new hardware to the ioat rawdev driver. Most rework and enhancements
> are largely self-explanatory from the individual patches.
> 
> The new hardware support is for the Intel(R) DSA accelerator which will be
> present in future Intel processors. A description of this new hardware is
> covered in [1]. Functions specific to the new hardware use the "idxd"
> prefix, for consistency with the kernel driver.
> 
> [1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
> 
> ---
> V2:
>  * Included documentation additions in the set
>  * Split off the rawdev unit test changes to a separate patchset for easier
>    review
>  * General code improvements and cleanups
> 
> 
Forgot to add that this patchset is based on top of the proposed rawdev API
changes:
http://patches.dpdk.org/project/dpdk/list/?series=11639


^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v2 17/18] raw/ioat: add xstats tracking for idxd devices
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 17/18] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
@ 2020-08-24  9:56     ` Laatz, Kevin
  0 siblings, 0 replies; 157+ messages in thread
From: Laatz, Kevin @ 2020-08-24  9:56 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: cheng1.jiang, patrick.fu, ping.yu

On 21/08/2020 17:29, Bruce Richardson wrote:
> Add update of the relevant stats for the data path functions and point the
> overall device struct xstats function pointers to the existing ioat
> functions.
>
> At this point, all necessary hooks for supporting the existing unit tests
> are in place so call them for each device.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   drivers/raw/ioat/idxd_pci.c            |  3 +++
>   drivers/raw/ioat/idxd_vdev.c           |  3 +++
>   drivers/raw/ioat/ioat_rawdev_test.c    |  2 +-
>   drivers/raw/ioat/rte_ioat_rawdev_fns.h | 30 +++++++++++++++-----------
>   4 files changed, 25 insertions(+), 13 deletions(-)
<snip>
> diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
> index 66e3f1a836..db8608fa6b 100644
> --- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
> +++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
> @@ -182,6 +182,8 @@ struct rte_idxd_user_hdl {
>    */
>   struct rte_idxd_rawdev {
>   	enum rte_ioat_dev_type type;
> +	struct rte_ioat_xstats xstats;
> +
>   	void *portal; /* address to write the batch descriptor */
>   
>   	/* counters to track the batches and the individual op handles */
> @@ -330,19 +332,15 @@ __idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
>   			IDXD_FLAG_CACHE_CONTROL;
>   
>   	/* check for room in the handle ring */
> -	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl) {
> -		rte_errno = ENOSPC;
> -		return 0;
> -	}
> -	if (b->op_count >= BATCH_SIZE) {
> -		rte_errno = ENOSPC;
> -		return 0;
> -	}
> +	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl)
> +		goto failed;
> +
> +	if (b->op_count >= BATCH_SIZE)
> +		goto failed;
> +
>   	/* check that we can actually use the current batch */
> -	if (b->submitted) {
> -		rte_errno = ENOSPC;
> -		return 0;
> -	}
> +	if (b->submitted)
> +		goto failed;

This 'cleanup' can be done when initially adding the function in patch 
"raw/ioat: add data path for idxd devices", allowing for this patch to 
be more concise.

/Kevin


^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/18] raw/ioat: split header for readability
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 02/18] raw/ioat: split header for readability Bruce Richardson
@ 2020-08-25 15:27     ` Laatz, Kevin
  0 siblings, 0 replies; 157+ messages in thread
From: Laatz, Kevin @ 2020-08-25 15:27 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: cheng1.jiang, patrick.fu, ping.yu

On 21/08/2020 17:29, Bruce Richardson wrote:
> Rather than having a single long complicated header file for general use we
> can split things so that there is one header with all the publically needed
> information - data structs and function prototypes - while the rest of the
> internal details are put separately. This makes it easier to read,
> understand and use the APIs.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>
> There are a couple of checkpatch errors about spacing in this patch,
> however, it appears that these are false positives.
> ---
>   drivers/raw/ioat/meson.build           |   1 +
>   drivers/raw/ioat/rte_ioat_rawdev.h     | 144 +---------------------
>   drivers/raw/ioat/rte_ioat_rawdev_fns.h | 164 +++++++++++++++++++++++++
>   3 files changed, 171 insertions(+), 138 deletions(-)
>   create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h
>
<snip>
> diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
> index 4bc6491d91..7ace5c085a 100644
> --- a/drivers/raw/ioat/rte_ioat_rawdev.h
> +++ b/drivers/raw/ioat/rte_ioat_rawdev.h
> @@ -14,12 +14,7 @@
>    * @b EXPERIMENTAL: these structures and APIs may change without prior notice
>    */
>   
> -#include <x86intrin.h>
> -#include <rte_atomic.h>
> -#include <rte_memory.h>
> -#include <rte_memzone.h>
> -#include <rte_prefetch.h>
> -#include "rte_ioat_spec.h"
> +#include <rte_common.h>
>   
>   /** Name of the device driver */
>   #define IOAT_PMD_RAWDEV_NAME rawdev_ioat
> @@ -38,38 +33,6 @@ struct rte_ioat_rawdev_config {
>   	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
>   };
>   
> -/**
> - * @internal
> - * Structure representing a device instance
> - */
> -struct rte_ioat_rawdev {
> -	struct rte_rawdev *rawdev;
> -	const struct rte_memzone *mz;
> -	const struct rte_memzone *desc_mz;
> -
> -	volatile struct rte_ioat_registers *regs;
> -	phys_addr_t status_addr;
> -	phys_addr_t ring_addr;
> -
> -	unsigned short ring_size;
> -	struct rte_ioat_generic_hw_desc *desc_ring;
> -	bool hdls_disable;
> -	__m128i *hdls; /* completion handles for returning to user */
> -
> -
> -	unsigned short next_read;
> -	unsigned short next_write;
> -
> -	/* some statistics for tracking, if added/changed update xstats fns*/
> -	uint64_t enqueue_failed __rte_cache_aligned;
> -	uint64_t enqueued;
> -	uint64_t started;
> -	uint64_t completed;
> -
> -	/* to report completions, the device will write status back here */
> -	volatile uint64_t status __rte_cache_aligned;
> -};
> -
>   /**
>    * Enqueue a copy operation onto the ioat device
>    *
> @@ -104,38 +67,7 @@ struct rte_ioat_rawdev {
>   static inline int
>   rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
>   		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
> -		int fence)
> -{
> -	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;


This assignment needs to be type cast to "struct rte_ioat_rawdev *" for 
C++ compilation compatibility.

There are a number of occurrences of this in this patch.
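
For reference, the assignment ends up taking the following form in the later
v3 patch in this thread ("raw/ioat: enable use from C++ code"):

	struct rte_ioat_rawdev *ioat =
			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;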


> -	unsigned short read = ioat->next_read;
> -	unsigned short write = ioat->next_write;
> -	unsigned short mask = ioat->ring_size - 1;
> -	unsigned short space = mask + read - write;
> -	struct rte_ioat_generic_hw_desc *desc;
> -
> -	if (space == 0) {
> -		ioat->enqueue_failed++;
> -		return 0;
> -	}
> -
> -	ioat->next_write = write + 1;
> -	write &= mask;
> -
> -	desc = &ioat->desc_ring[write];
> -	desc->size = length;
> -	/* set descriptor write-back every 16th descriptor */
> -	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
> -	desc->src_addr = src;
> -	desc->dest_addr = dst;
> -	if (!ioat->hdls_disable)
> -		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
> -					(int64_t)src_hdl);
> -
> -	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
> -
> -	ioat->enqueued++;
> -	return 1;
> -}
> +		int fence);
>   
<snip>
> +/**
> + * Returns details of copy operations that have been completed
> + */
> +static inline int
> +rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
> +		uintptr_t *src_hdls, uintptr_t *dst_hdls)
> +{
> +	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
> +	unsigned short mask = (ioat->ring_size - 1);
> +	unsigned short read = ioat->next_read;
> +	unsigned short end_read, count;
> +	int error;
> +	int i = 0;
> +
> +	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
> +	count = (end_read - (read & mask)) & mask;
> +
> +	if (error) {
> +		rte_errno = EIO;
> +		return -1;
> +	}
> +
> +	if (ioat->hdls_disable) {
> +		read += count;
> +		goto end;
> +	}
> +
> +	if (count > max_copies)
> +		count = max_copies;
> +
> +	for (; i < count - 1; i += 2, read += 2) {
> +		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
> +		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
> +
> +		_mm_storeu_si128((void *)&src_hdls[i],
> +				_mm_unpacklo_epi64(hdls0, hdls1));
> +		_mm_storeu_si128((void *)&dst_hdls[i],
> +				_mm_unpackhi_epi64(hdls0, hdls1));

"src_hdls" and "dst_hdls" need to be type cast to "__m128i *" here for 
C++ compatibility.


> +	}
> +	for (; i < count; i++, read++) {
> +		uintptr_t *hdls = (void *)&ioat->hdls[read & mask];

Type cast for "ioat->hdls" to "__m128i *" needed here for C++ compatibility.


> +		src_hdls[i] = hdls[0];
> +		dst_hdls[i] = hdls[1];
> +	}
> +
> +end:
> +	ioat->next_read = read;
> +	ioat->completed += count;
> +	return count;
> +}
> +
> +#endif /* _RTE_IOAT_RAWDEV_FNS_H_ */

Thanks,
Kevin

^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v2 08/18] raw/ioat: create rawdev instances on idxd PCI probe
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 08/18] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
@ 2020-08-25 15:27     ` Laatz, Kevin
  2020-08-26 15:45       ` Bruce Richardson
  0 siblings, 1 reply; 157+ messages in thread
From: Laatz, Kevin @ 2020-08-25 15:27 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: cheng1.jiang, patrick.fu, ping.yu

On 21/08/2020 17:29, Bruce Richardson wrote:
> When a matching device is found via PCI probe create a rawdev instance for
> each queue on the hardware. Use empty self-test function for these devices
> so that the overall rawdev_autotest does not report failures.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   drivers/raw/ioat/idxd_pci.c            | 236 ++++++++++++++++++++++++-
>   drivers/raw/ioat/ioat_common.c         |  61 +++++++
>   drivers/raw/ioat/ioat_private.h        |  31 ++++
>   drivers/raw/ioat/ioat_rawdev_test.c    |   7 +
>   drivers/raw/ioat/ioat_spec.h           |  64 +++++++
>   drivers/raw/ioat/meson.build           |   1 +
>   drivers/raw/ioat/rte_ioat_rawdev_fns.h |  35 +++-
>   7 files changed, 432 insertions(+), 3 deletions(-)
>   create mode 100644 drivers/raw/ioat/ioat_common.c
>
<snip>


> diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
> index d87d4d055e..32c824536d 100644
> --- a/drivers/raw/ioat/ioat_private.h
> +++ b/drivers/raw/ioat/ioat_private.h
> @@ -14,6 +14,10 @@
>    * @b EXPERIMENTAL: these structures and APIs may change without prior notice
>    */
>   
> +#include <rte_spinlock.h>
> +#include <rte_rawdev_pmd.h>
> +#include "rte_ioat_rawdev.h"
> +
>   extern int ioat_pmd_logtype;
>   
>   #define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
> @@ -24,4 +28,31 @@ extern int ioat_pmd_logtype;
>   #define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
>   #define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
>   
> +struct idxd_pci_common {
> +	rte_spinlock_t lk;
> +	volatile struct rte_idxd_bar0 *regs;
> +	volatile struct rte_idxd_wqcfg *wq_regs;
> +	volatile struct rte_idxd_grpcfg *grp_regs;
> +	volatile void *portals;
> +};
> +
> +struct idxd_rawdev {
> +	struct rte_idxd_rawdev public; /* the public members, must be first */

For C++ compatibility, we cannot use "public" since it is a reserved 
word in C++. Suggest renaming to "pub".


Thanks,
Kevin

^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v2 10/18] raw/ioat: add datapath data structures for idxd devices
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 10/18] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
@ 2020-08-25 15:27     ` Laatz, Kevin
  0 siblings, 0 replies; 157+ messages in thread
From: Laatz, Kevin @ 2020-08-25 15:27 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: cheng1.jiang, patrick.fu, ping.yu

On 21/08/2020 17:29, Bruce Richardson wrote:
> Add in the relevant data structures for the data path for DSA devices. Also
> include a device dump function to output the status of each device.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   drivers/raw/ioat/idxd_pci.c            |  3 +-
>   drivers/raw/ioat/idxd_vdev.c           |  1 +
>   drivers/raw/ioat/ioat_common.c         | 34 +++++++++++
>   drivers/raw/ioat/ioat_private.h        |  2 +
>   drivers/raw/ioat/ioat_rawdev_test.c    |  3 +-
>   drivers/raw/ioat/rte_ioat_rawdev_fns.h | 80 ++++++++++++++++++++++++++
>   6 files changed, 121 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
> index 72f4ecebb7..ce238ae04c 100644
> --- a/drivers/raw/ioat/idxd_pci.c
> +++ b/drivers/raw/ioat/idxd_pci.c
> @@ -52,7 +52,8 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
>   }
>   
>   static const struct rte_rawdev_ops idxd_pci_ops = {
> -		.dev_selftest = idxd_rawdev_test
> +		.dev_selftest = idxd_rawdev_test,
> +		.dump = idxd_dev_dump,
>   };
>   
>   /* each portal uses 4 x 4k pages */
> diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
> index 93c023a6e8..0f9aa48e84 100644
> --- a/drivers/raw/ioat/idxd_vdev.c
> +++ b/drivers/raw/ioat/idxd_vdev.c
> @@ -32,6 +32,7 @@ struct idxd_vdev_args {
>   
>   static const struct rte_rawdev_ops idxd_vdev_ops = {
>   		.dev_selftest = idxd_rawdev_test,
> +		.dump = idxd_dev_dump,
>   };
>   
>   static void *
> diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
> index 37a14e514d..fb4f7055de 100644
> --- a/drivers/raw/ioat/ioat_common.c
> +++ b/drivers/raw/ioat/ioat_common.c
> @@ -7,6 +7,36 @@
>   
>   #include "ioat_private.h"
>   
> +int
> +idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
> +{
> +	struct idxd_rawdev *idxd = dev->dev_private;
> +	struct rte_idxd_rawdev *rte_idxd = &idxd->public;

For C++ compatibility, we need to rename "public" since it is a reserved 
word.

This will trigger renaming this wherever it is used in other patches, of 
course :)

Thanks,
Kevin


<snip>


^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v2 13/18] raw/ioat: add data path for idxd devices
  2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 13/18] raw/ioat: add data path " Bruce Richardson
@ 2020-08-25 15:27     ` Laatz, Kevin
  0 siblings, 0 replies; 157+ messages in thread
From: Laatz, Kevin @ 2020-08-25 15:27 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: cheng1.jiang, patrick.fu, ping.yu

On 21/08/2020 17:29, Bruce Richardson wrote:
> Add support for doing copies using DSA hardware. This is implemented by
> just switching on the device type field at the start of the inline
> functions. Since there is no hardware which will have both device types
> present this branch will always be predictable after the first call,
> meaning it has little to no perf penalty.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   drivers/raw/ioat/ioat_common.c         |   1 +
>   drivers/raw/ioat/ioat_rawdev.c         |   1 +
>   drivers/raw/ioat/rte_ioat_rawdev_fns.h | 164 +++++++++++++++++++++++--
>   3 files changed, 157 insertions(+), 9 deletions(-)
>
<snip>
> diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
> index 1939437d50..19aaaa50c8 100644
> --- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
> +++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
> @@ -194,8 +194,8 @@ struct rte_idxd_rawdev {
>   /**
>    * Enqueue a copy operation onto the ioat device
>    */
> -static inline int
> -rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
> +static __rte_always_inline int
> +__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
>   		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
>   		int fence)
>   {
> @@ -233,8 +233,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
>   /**
>    * Trigger hardware to begin performing enqueued copy operations
>    */
> -static inline void
> -rte_ioat_do_copies(int dev_id)
> +static __rte_always_inline void
> +__ioat_perform_ops(int dev_id)
>   {
>   	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
>   	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
> @@ -248,8 +248,8 @@ rte_ioat_do_copies(int dev_id)
>    * @internal
>    * Returns the index of the last completed operation.
>    */
> -static inline int
> -rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
> +static __rte_always_inline int
> +__ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
>   {
>   	uint64_t status = ioat->status;
>   
> @@ -263,8 +263,8 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
>   /**
>    * Returns details of copy operations that have been completed
>    */
> -static inline int
> -rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
> +static __rte_always_inline int
> +__ioat_completed_ops(int dev_id, uint8_t max_copies,
>   		uintptr_t *src_hdls, uintptr_t *dst_hdls)
>   {
>   	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
> @@ -274,7 +274,7 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
>   	int error;
>   	int i = 0;
>   
> -	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
> +	end_read = (__ioat_get_last_completed(ioat, &error) + 1) & mask;
>   	count = (end_read - (read & mask)) & mask;
>   
>   	if (error) {
> @@ -311,4 +311,150 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
>   	return count;
>   }
>   
> +static __rte_always_inline int
> +__idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
> +		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
> +		int fence __rte_unused)
> +{
> +	struct rte_idxd_rawdev *idxd = rte_rawdevs[dev_id].dev_private;

For C++ compatibility, "dev_private" needs to be type cast to "struct 
rte_idxd_rawdev *" here.


> +	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
> +	uint32_t op_flags = (idxd_op_memmove << IDXD_CMD_OP_SHIFT) |
> +			IDXD_FLAG_CACHE_CONTROL;
> +
> +	/* check for room in the handle ring */
> +	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl) {
> +		rte_errno = ENOSPC;
> +		return 0;
> +	}
> +	if (b->op_count >= BATCH_SIZE) {
> +		rte_errno = ENOSPC;
> +		return 0;
> +	}
> +	/* check that we can actually use the current batch */
> +	if (b->submitted) {
> +		rte_errno = ENOSPC;
> +		return 0;
> +	}
> +
> +	/* write the descriptor */
> +	b->ops[b->op_count++] = (struct rte_idxd_hw_desc){
> +		.op_flags =  op_flags,
> +		.src = src,
> +		.dst = dst,
> +		.size = length
> +	};
> +
> +	/* store the completion details */
> +	if (!idxd->hdls_disable)
> +		idxd->hdl_ring[idxd->next_free_hdl] = (struct rte_idxd_user_hdl) {
> +			.src = src_hdl,
> +			.dst = dst_hdl
> +		};
> +	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
> +		idxd->next_free_hdl = 0;
> +
> +	return 1;
> +}
> +
> +static __rte_always_inline void
> +__idxd_movdir64b(volatile void *dst, const void *src)
> +{
> +	asm volatile (".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
> +			:
> +			: "a" (dst), "d" (src));
> +}
> +
> +static __rte_always_inline void
> +__idxd_perform_ops(int dev_id)
> +{
> +	struct rte_idxd_rawdev *idxd = rte_rawdevs[dev_id].dev_private;

Type cast needed here and more below.

Thanks,
Kevin


> +	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
> +
> +	if (b->submitted || b->op_count == 0)
> +		return;
> +	b->hdl_end = idxd->next_free_hdl;
> +	b->comp.status = 0;
> +	b->submitted = 1;
> +	b->batch_desc.size = b->op_count + 1;
> +	__idxd_movdir64b(idxd->portal, &b->batch_desc);
> +
> +	if (++idxd->next_batch == idxd->batch_ring_sz)
> +		idxd->next_batch = 0;
> +}
> +
> +static __rte_always_inline int
> +__idxd_completed_ops(int dev_id, uint8_t max_ops,
> +		uintptr_t *src_hdls, uintptr_t *dst_hdls)
> +{
> +	struct rte_idxd_rawdev *idxd = rte_rawdevs[dev_id].dev_private;
> +	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_completed];
> +	uint16_t h_idx = idxd->next_ret_hdl;
> +	int n = 0;
> +
> +	while (b->submitted && b->comp.status != 0) {
> +		idxd->last_completed_hdl = b->hdl_end;
> +		b->submitted = 0;
> +		b->op_count = 0;
> +		if (++idxd->next_completed == idxd->batch_ring_sz)
> +			idxd->next_completed = 0;
> +		b = &idxd->batch_ring[idxd->next_completed];
> +	}
> +
> +	if (!idxd->hdls_disable)
> +		for (n = 0; n < max_ops && h_idx != idxd->last_completed_hdl; n++) {
> +			src_hdls[n] = idxd->hdl_ring[h_idx].src;
> +			dst_hdls[n] = idxd->hdl_ring[h_idx].dst;
> +			if (++h_idx == idxd->hdl_ring_sz)
> +				h_idx = 0;
> +		}
> +	else
> +		while (h_idx != idxd->last_completed_hdl) {
> +			n++;
> +			if (++h_idx == idxd->hdl_ring_sz)
> +				h_idx = 0;
> +		}
> +
> +	idxd->next_ret_hdl = h_idx;
> +
> +	return n;
> +}
> +
> +static inline int
> +rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
> +		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
> +		int fence)
> +{
> +	enum rte_ioat_dev_type *type = rte_rawdevs[dev_id].dev_private;
> +	if (*type == RTE_IDXD_DEV)
> +		return __idxd_enqueue_copy(dev_id, src, dst, length,
> +				src_hdl, dst_hdl, fence);
> +	else
> +		return __ioat_enqueue_copy(dev_id, src, dst, length,
> +				src_hdl, dst_hdl, fence);
> +}
> +
> +static inline void
> +rte_ioat_do_copies(int dev_id)
> +{
> +	enum rte_ioat_dev_type *type = rte_rawdevs[dev_id].dev_private;
> +	if (*type == RTE_IDXD_DEV)
> +		return __idxd_perform_ops(dev_id);
> +	else
> +		return __ioat_perform_ops(dev_id);
> +}
> +
> +static inline int
> +rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
> +		uintptr_t *src_hdls, uintptr_t *dst_hdls)
> +{
> +	enum rte_ioat_dev_type *type = rte_rawdevs[dev_id].dev_private;
> +	if (*type == RTE_IDXD_DEV)
> +		return __idxd_completed_ops(dev_id, max_copies,
> +				src_hdls, dst_hdls);
> +	else
> +		return __ioat_completed_ops(dev_id,  max_copies,
> +				src_hdls, dst_hdls);
> +}
> +
> +
>   #endif /* _RTE_IOAT_RAWDEV_FNS_H_ */



^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v2 08/18] raw/ioat: create rawdev instances on idxd PCI probe
  2020-08-25 15:27     ` Laatz, Kevin
@ 2020-08-26 15:45       ` Bruce Richardson
  0 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-08-26 15:45 UTC (permalink / raw)
  To: Laatz, Kevin; +Cc: dev, cheng1.jiang, patrick.fu, ping.yu

On Tue, Aug 25, 2020 at 04:27:43PM +0100, Laatz, Kevin wrote:
> On 21/08/2020 17:29, Bruce Richardson wrote:
> > When a matching device is found via PCI probe create a rawdev instance for
> > each queue on the hardware. Use empty self-test function for these devices
> > so that the overall rawdev_autotest does not report failures.
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >   drivers/raw/ioat/idxd_pci.c            | 236 ++++++++++++++++++++++++-
> >   drivers/raw/ioat/ioat_common.c         |  61 +++++++
> >   drivers/raw/ioat/ioat_private.h        |  31 ++++
> >   drivers/raw/ioat/ioat_rawdev_test.c    |   7 +
> >   drivers/raw/ioat/ioat_spec.h           |  64 +++++++
> >   drivers/raw/ioat/meson.build           |   1 +
> >   drivers/raw/ioat/rte_ioat_rawdev_fns.h |  35 +++-
> >   7 files changed, 432 insertions(+), 3 deletions(-)
> >   create mode 100644 drivers/raw/ioat/ioat_common.c
> > 
> <snip>
> 
> 
> > diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
> > index d87d4d055e..32c824536d 100644
> > --- a/drivers/raw/ioat/ioat_private.h
> > +++ b/drivers/raw/ioat/ioat_private.h
> > @@ -14,6 +14,10 @@
> >    * @b EXPERIMENTAL: these structures and APIs may change without prior notice
> >    */
> > +#include <rte_spinlock.h>
> > +#include <rte_rawdev_pmd.h>
> > +#include "rte_ioat_rawdev.h"
> > +
> >   extern int ioat_pmd_logtype;
> >   #define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
> > @@ -24,4 +28,31 @@ extern int ioat_pmd_logtype;
> >   #define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
> >   #define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
> > +struct idxd_pci_common {
> > +	rte_spinlock_t lk;
> > +	volatile struct rte_idxd_bar0 *regs;
> > +	volatile struct rte_idxd_wqcfg *wq_regs;
> > +	volatile struct rte_idxd_grpcfg *grp_regs;
> > +	volatile void *portals;
> > +};
> > +
> > +struct idxd_rawdev {
> > +	struct rte_idxd_rawdev public; /* the public members, must be first */
> 
> For C++ compatibility, we cannot use "public" since it is a reserved word in
> C++. Suggest renaming to "pub".

Actually, this should be fine, since the word public only occurs in the
private header file and is not exposed to external apps.

/Bruce

^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 00/25] raw/ioat: enhancements and new hardware support
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (20 preceding siblings ...)
  2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
@ 2020-09-25 11:08 ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 01/25] doc/api: add ioat driver to index Bruce Richardson
                     ` (24 more replies)
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (2 subsequent siblings)
  24 siblings, 25 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson

This patchset adds some small enhancements, some rework and also support
for new hardware to the ioat rawdev driver. Most rework and enhancements
are largely self-explanatory from the individual patches.

The new hardware support is for the Intel(R) DSA accelerator which will be
present in future Intel processors. A description of this new hardware is
covered in [1]. Functions specific to the new hardware use the "idxd"
prefix, for consistency with the kernel driver.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

---
V3:
 * More doc updates including release note updates throughout the set
 * Added in fill operation
 * Added in fix for missing close operation
 * Added in fix for doc building to ensure ioat is in the index

V2:
 * Included documentation additions in the set
 * Split off the rawdev unit test changes to a separate patchset for easier
   review
 * General code improvements and cleanups

Bruce Richardson (19):
  doc/api: add ioat driver to index
  raw/ioat: enable use from C++ code
  raw/ioat: include extra info in error messages
  raw/ioat: split header for readability
  raw/ioat: rename functions to be operation-agnostic
  raw/ioat: add separate API for fence call
  raw/ioat: make the HW register spec private
  raw/ioat: add skeleton for VFIO/UIO based DSA device
  raw/ioat: include example configuration script
  raw/ioat: create rawdev instances on idxd PCI probe
  raw/ioat: add datapath data structures for idxd devices
  raw/ioat: add configure function for idxd devices
  raw/ioat: add start and stop functions for idxd devices
  raw/ioat: add data path for idxd devices
  raw/ioat: add info function for idxd devices
  raw/ioat: create separate statistics structure
  raw/ioat: move xstats functions to common file
  raw/ioat: add xstats tracking for idxd devices
  raw/ioat: clean up use of common test function

Cheng Jiang (1):
  raw/ioat: add a flag to control copying handle parameters

Kevin Laatz (5):
  raw/ioat: fix missing close function
  usertools/dpdk-devbind.py: add support for DSA HW
  raw/ioat: add vdev probe for DSA/idxd devices
  raw/ioat: create rawdev instances for idxd vdevs
  raw/ioat: add fill operation

 doc/api/doxy-api-index.md                     |   1 +
 doc/api/doxy-api.conf.in                      |   1 +
 doc/guides/rawdevs/ioat.rst                   | 163 +++--
 doc/guides/rel_notes/release_20_11.rst        |  23 +
 drivers/raw/ioat/dpdk_idxd_cfg.py             |  79 +++
 drivers/raw/ioat/idxd_pci.c                   | 345 ++++++++++
 drivers/raw/ioat/idxd_vdev.c                  | 233 +++++++
 drivers/raw/ioat/ioat_common.c                | 244 +++++++
 drivers/raw/ioat/ioat_private.h               |  82 +++
 drivers/raw/ioat/ioat_rawdev.c                |  92 +--
 drivers/raw/ioat/ioat_rawdev_test.c           | 112 +++-
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} |  90 ++-
 drivers/raw/ioat/meson.build                  |  15 +-
 drivers/raw/ioat/rte_ioat_rawdev.h            | 221 +++----
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 599 ++++++++++++++++++
 examples/ioat/ioatfwd.c                       |  16 +-
 lib/librte_eal/include/rte_common.h           |   1 +
 usertools/dpdk-devbind.py                     |   4 +-
 18 files changed, 1971 insertions(+), 350 deletions(-)
 create mode 100755 drivers/raw/ioat/dpdk_idxd_cfg.py
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/idxd_vdev.c
 create mode 100644 drivers/raw/ioat/ioat_common.c
 create mode 100644 drivers/raw/ioat/ioat_private.h
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (74%)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 01/25] doc/api: add ioat driver to index
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 02/25] raw/ioat: fix missing close function Bruce Richardson
                     ` (23 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, stable, Kevin Laatz

Add the ioat driver to the doxygen documentation.

Cc: stable@dpdk.org

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 doc/api/doxy-api-index.md | 1 +
 doc/api/doxy-api.conf.in  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index b855a8f3b..06e49f438 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -44,6 +44,7 @@ The public API headers are grouped by topics:
   [ixgbe]              (@ref rte_pmd_ixgbe.h),
   [i40e]               (@ref rte_pmd_i40e.h),
   [ice]                (@ref rte_pmd_ice.h),
+  [ioat]               (@ref rte_ioat_rawdev.h),
   [bnxt]               (@ref rte_pmd_bnxt.h),
   [dpaa]               (@ref rte_pmd_dpaa.h),
   [dpaa2]              (@ref rte_pmd_dpaa2.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 42d38919d..f87336365 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -18,6 +18,7 @@ INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/drivers/net/softnic \
                           @TOPDIR@/drivers/raw/dpaa2_cmdif \
                           @TOPDIR@/drivers/raw/dpaa2_qdma \
+                          @TOPDIR@/drivers/raw/ioat \
                           @TOPDIR@/lib/librte_eal/include \
                           @TOPDIR@/lib/librte_eal/include/generic \
                           @TOPDIR@/lib/librte_acl \
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 02/25] raw/ioat: fix missing close function
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 01/25] doc/api: add ioat driver to index Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:12     ` Bruce Richardson
  2020-09-25 11:12     ` Pai G, Sunil
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 03/25] raw/ioat: enable use from C++ code Bruce Richardson
                     ` (22 subsequent siblings)
  24 siblings, 2 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Kevin Laatz, stable, Sunil Pai G

From: Kevin Laatz <kevin.laatz@intel.com>

When rte_rawdev_pmd_release() is called, rte_rawdev_close() looks for a
dev_close function for the device, causing a segmentation fault when no
close() function is implemented for a driver.

This patch resolves the issue by adding a stub function ioat_dev_close().

Fixes: f687e842e328 ("raw/ioat: introduce IOAT driver")
Cc: stable@dpdk.org

Reported-by: Sunil Pai G <sunil.pai.g@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 7f1a15436..0732b059f 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -203,6 +203,12 @@ ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
 	return 0;
 }
 
+static int
+ioat_dev_close(struct rte_rawdev *dev __rte_unused)
+{
+	return 0;
+}
+
 extern int ioat_rawdev_test(uint16_t dev_id);
 
 static int
@@ -212,6 +218,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 			.dev_configure = ioat_dev_configure,
 			.dev_start = ioat_dev_start,
 			.dev_stop = ioat_dev_stop,
+			.dev_close = ioat_dev_close,
 			.dev_info_get = ioat_dev_info_get,
 			.xstats_get = ioat_xstats_get,
 			.xstats_get_names = ioat_xstats_get_names,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 03/25] raw/ioat: enable use from C++ code
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 01/25] doc/api: add ioat driver to index Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 02/25] raw/ioat: fix missing close function Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 04/25] raw/ioat: include extra info in error messages Bruce Richardson
                     ` (21 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

To allow the header file to be used from C++ code we need to ensure all
typecasts are explicit, and include an 'extern "C"' guard.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/rte_ioat_rawdev.h | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index f765a6557..3d8419271 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -5,6 +5,10 @@
 #ifndef _RTE_IOAT_RAWDEV_H_
 #define _RTE_IOAT_RAWDEV_H_
 
+#ifdef __cplusplus
+extern "C" {
+#endif
+
 /**
  * @file rte_ioat_rawdev.h
  *
@@ -100,7 +104,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
 		int fence)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	unsigned short read = ioat->next_read;
 	unsigned short write = ioat->next_write;
 	unsigned short mask = ioat->ring_size - 1;
@@ -141,7 +146,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 static inline void
 rte_ioat_do_copies(int dev_id)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
 			.control.completion_update = 1;
 	rte_compiler_barrier();
@@ -190,7 +196,8 @@ static inline int
 rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	unsigned short mask = (ioat->ring_size - 1);
 	unsigned short read = ioat->next_read;
 	unsigned short end_read, count;
@@ -212,13 +219,13 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
 		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
 
-		_mm_storeu_si128((void *)&src_hdls[i],
+		_mm_storeu_si128((__m128i *)&src_hdls[i],
 				_mm_unpacklo_epi64(hdls0, hdls1));
-		_mm_storeu_si128((void *)&dst_hdls[i],
+		_mm_storeu_si128((__m128i *)&dst_hdls[i],
 				_mm_unpackhi_epi64(hdls0, hdls1));
 	}
 	for (; i < count; i++, read++) {
-		uintptr_t *hdls = (void *)&ioat->hdls[read & mask];
+		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
 		src_hdls[i] = hdls[0];
 		dst_hdls[i] = hdls[1];
 	}
@@ -228,4 +235,8 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+#ifdef __cplusplus
+}
+#endif
+
 #endif /* _RTE_IOAT_RAWDEV_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 04/25] raw/ioat: include extra info in error messages
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (2 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 03/25] raw/ioat: enable use from C++ code Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
                     ` (20 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

In case of any failures, include the function name and the line number in
the error message, to make tracking down the failure easier.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
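
A small self-contained illustration of the pattern (the setup_ring()
function and its message are made up for the example): the macro passes
__func__ and __LINE__ through to a single formatting helper, so every
message identifies its call site.

    #include <stdio.h>
    #include <stdarg.h>

    #define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)

    static int
    print_err(const char *func, int lineno, const char *format, ...)
    {
            va_list ap;
            int ret;

            ret = fprintf(stderr, "In %s:%d - ", func, lineno);
            va_start(ap, format);
            ret += vfprintf(stderr, format, ap);
            va_end(ap);
            return ret;
    }

    static int
    setup_ring(int dev_id)
    {
            if (dev_id < 0) {
                    /* prints e.g. "In setup_ring:26 - bad device id: -1" */
                    PRINT_ERR("bad device id: %d\n", dev_id);
                    return -1;
            }
            return 0;
    }

---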
 drivers/raw/ioat/ioat_rawdev_test.c | 53 +++++++++++++++++++----------
 1 file changed, 35 insertions(+), 18 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index c463a82ad..8e7fd96af 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -13,6 +13,23 @@ int ioat_rawdev_test(uint16_t dev_id); /* pre-define to keep compiler happy */
 static struct rte_mempool *pool;
 static unsigned short expected_ring_size;
 
+#define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
+
+static inline int
+__rte_format_printf(3, 4)
+print_err(const char *func, int lineno, const char *format, ...)
+{
+	va_list ap;
+	int ret;
+
+	ret = fprintf(stderr, "In %s:%d - ", func, lineno);
+	va_start(ap, format);
+	ret += vfprintf(stderr, format, ap);
+	va_end(ap);
+
+	return ret;
+}
+
 static int
 test_enqueue_copies(int dev_id)
 {
@@ -42,7 +59,7 @@ test_enqueue_copies(int dev_id)
 				(uintptr_t)src,
 				(uintptr_t)dst,
 				0 /* no fence */) != 1) {
-			printf("Error with rte_ioat_enqueue_copy\n");
+			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
 		rte_ioat_do_copies(dev_id);
@@ -50,18 +67,18 @@ test_enqueue_copies(int dev_id)
 
 		if (rte_ioat_completed_copies(dev_id, 1, (void *)&completed[0],
 				(void *)&completed[1]) != 1) {
-			printf("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_copies\n");
 			return -1;
 		}
 		if (completed[0] != src || completed[1] != dst) {
-			printf("Error with completions: got (%p, %p), not (%p,%p)\n",
+			PRINT_ERR("Error with completions: got (%p, %p), not (%p,%p)\n",
 					completed[0], completed[1], src, dst);
 			return -1;
 		}
 
 		for (i = 0; i < length; i++)
 			if (dst_data[i] != src_data[i]) {
-				printf("Data mismatch at char %u\n", i);
+				PRINT_ERR("Data mismatch at char %u\n", i);
 				return -1;
 			}
 		rte_pktmbuf_free(src);
@@ -94,7 +111,7 @@ test_enqueue_copies(int dev_id)
 					(uintptr_t)srcs[i],
 					(uintptr_t)dsts[i],
 					0 /* nofence */) != 1) {
-				printf("Error with rte_ioat_enqueue_copy for buffer %u\n",
+				PRINT_ERR("Error with rte_ioat_enqueue_copy for buffer %u\n",
 						i);
 				return -1;
 			}
@@ -104,18 +121,18 @@ test_enqueue_copies(int dev_id)
 
 		if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
 				(void *)completed_dst) != RTE_DIM(srcs)) {
-			printf("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_copies\n");
 			return -1;
 		}
 		for (i = 0; i < RTE_DIM(srcs); i++) {
 			char *src_data, *dst_data;
 
 			if (completed_src[i] != srcs[i]) {
-				printf("Error with source pointer %u\n", i);
+				PRINT_ERR("Error with source pointer %u\n", i);
 				return -1;
 			}
 			if (completed_dst[i] != dsts[i]) {
-				printf("Error with dest pointer %u\n", i);
+				PRINT_ERR("Error with dest pointer %u\n", i);
 				return -1;
 			}
 
@@ -123,7 +140,7 @@ test_enqueue_copies(int dev_id)
 			dst_data = rte_pktmbuf_mtod(dsts[i], char *);
 			for (j = 0; j < length; j++)
 				if (src_data[j] != dst_data[j]) {
-					printf("Error with copy of packet %u, byte %u\n",
+					PRINT_ERR("Error with copy of packet %u, byte %u\n",
 							i, j);
 					return -1;
 				}
@@ -150,26 +167,26 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	rte_rawdev_info_get(dev_id, &info, sizeof(p));
 	if (p.ring_size != expected_ring_size) {
-		printf("Error, initial ring size is not as expected (Actual: %d, Expected: %d)\n",
+		PRINT_ERR("Error, initial ring size is not as expected (Actual: %d, Expected: %d)\n",
 				(int)p.ring_size, expected_ring_size);
 		return -1;
 	}
 
 	p.ring_size = IOAT_TEST_RINGSIZE;
 	if (rte_rawdev_configure(dev_id, &info, sizeof(p)) != 0) {
-		printf("Error with rte_rawdev_configure()\n");
+		PRINT_ERR("Error with rte_rawdev_configure()\n");
 		return -1;
 	}
 	rte_rawdev_info_get(dev_id, &info, sizeof(p));
 	if (p.ring_size != IOAT_TEST_RINGSIZE) {
-		printf("Error, ring size is not %d (%d)\n",
+		PRINT_ERR("Error, ring size is not %d (%d)\n",
 				IOAT_TEST_RINGSIZE, (int)p.ring_size);
 		return -1;
 	}
 	expected_ring_size = p.ring_size;
 
 	if (rte_rawdev_start(dev_id) != 0) {
-		printf("Error with rte_rawdev_start()\n");
+		PRINT_ERR("Error with rte_rawdev_start()\n");
 		return -1;
 	}
 
@@ -180,7 +197,7 @@ ioat_rawdev_test(uint16_t dev_id)
 			2048, /* data room size */
 			info.socket_id);
 	if (pool == NULL) {
-		printf("Error with mempool creation\n");
+		PRINT_ERR("Error with mempool creation\n");
 		return -1;
 	}
 
@@ -189,14 +206,14 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	snames = malloc(sizeof(*snames) * nb_xstats);
 	if (snames == NULL) {
-		printf("Error allocating xstat names memory\n");
+		PRINT_ERR("Error allocating xstat names memory\n");
 		goto err;
 	}
 	rte_rawdev_xstats_names_get(dev_id, snames, nb_xstats);
 
 	ids = malloc(sizeof(*ids) * nb_xstats);
 	if (ids == NULL) {
-		printf("Error allocating xstat ids memory\n");
+		PRINT_ERR("Error allocating xstat ids memory\n");
 		goto err;
 	}
 	for (i = 0; i < nb_xstats; i++)
@@ -204,7 +221,7 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	stats = malloc(sizeof(*stats) * nb_xstats);
 	if (stats == NULL) {
-		printf("Error allocating xstat memory\n");
+		PRINT_ERR("Error allocating xstat memory\n");
 		goto err;
 	}
 
@@ -224,7 +241,7 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	rte_rawdev_stop(dev_id);
 	if (rte_rawdev_xstats_reset(dev_id, NULL, 0) != 0) {
-		printf("Error resetting xstat values\n");
+		PRINT_ERR("Error resetting xstat values\n");
 		goto err;
 	}
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 05/25] raw/ioat: add a flag to control copying handle parameters
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (3 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 04/25] raw/ioat: include extra info in error messages Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 06/25] raw/ioat: split header for readability Bruce Richardson
                     ` (19 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Cheng Jiang, Bruce Richardson, Kevin Laatz

From: Cheng Jiang <Cheng1.jiang@intel.com>

Add a flag which controls whether the rte_ioat_enqueue_copy and
rte_ioat_completed_copies functions should process handle parameters.
Skipping this processing can improve performance when handle parameters
are not necessary.

Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
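
A minimal configuration sketch, mirroring the existing selftest code (the
512-entry ring size is just an example value): when completion handles
are not needed, set hdls_disable so the datapath skips storing and
returning them.

    #include <stdbool.h>
    #include <rte_rawdev.h>
    #include <rte_ioat_rawdev.h>

    static int
    configure_without_handles(uint16_t dev_id)
    {
            struct rte_ioat_rawdev_config cfg = {
                    .ring_size = 512,     /* power of two, 64 - 4096 */
                    .hdls_disable = true, /* ignore src/dst handle params */
            };
            struct rte_rawdev_info info = { .dev_private = &cfg };

            if (rte_rawdev_configure(dev_id, &info, sizeof(cfg)) != 0)
                    return -1;
            return rte_rawdev_start(dev_id);
    }

---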
 doc/guides/rawdevs/ioat.rst            |  3 ++
 doc/guides/rel_notes/release_20_11.rst |  6 ++++
 drivers/raw/ioat/ioat_rawdev.c         |  2 ++
 drivers/raw/ioat/rte_ioat_rawdev.h     | 45 +++++++++++++++++++-------
 4 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index c46460ff4..af00d77fb 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -129,6 +129,9 @@ output, the ``dev_private`` structure element cannot be NULL, and must
 point to a valid ``rte_ioat_rawdev_config`` structure, containing the ring
 size to be used by the device. The ring size must be a power of two,
 between 64 and 4096.
+If it is not needed, the tracking by the driver of user-provided completion
+handles may be disabled by setting the ``hdls_disable`` flag in
+the configuration structure also.
 
 The following code shows how the device is configured in
 ``test_ioat_rawdev.c``:
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4eb3224a7..196209f63 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -78,6 +78,12 @@ New Features
     ``--portmask=N``
     where N represents the hexadecimal bitmask of ports used.
 
+* **Updated ioat rawdev driver**
+
+  The ioat rawdev driver has been updated and enhanced. Changes include:
+
+  * Added a per-device configuration flag to disable management of user-provided completion handles
+
 
 Removed Items
 -------------
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 0732b059f..ea9f51ffc 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -58,6 +58,7 @@ ioat_dev_configure(const struct rte_rawdev *dev, rte_rawdev_obj_t config,
 		return -EINVAL;
 
 	ioat->ring_size = params->ring_size;
+	ioat->hdls_disable = params->hdls_disable;
 	if (ioat->desc_ring != NULL) {
 		rte_memzone_free(ioat->desc_mz);
 		ioat->desc_ring = NULL;
@@ -122,6 +123,7 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 		return -EINVAL;
 
 	cfg->ring_size = ioat->ring_size;
+	cfg->hdls_disable = ioat->hdls_disable;
 	return 0;
 }
 
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 3d8419271..28ce95cc9 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -38,7 +38,8 @@ extern "C" {
  * an ioat rawdev instance.
  */
 struct rte_ioat_rawdev_config {
-	unsigned short ring_size;
+	unsigned short ring_size; /**< size of job submission descriptor ring */
+	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
 /**
@@ -56,6 +57,7 @@ struct rte_ioat_rawdev {
 
 	unsigned short ring_size;
 	struct rte_ioat_generic_hw_desc *desc_ring;
+	bool hdls_disable;
 	__m128i *hdls; /* completion handles for returning to user */
 
 
@@ -88,10 +90,14 @@ struct rte_ioat_rawdev {
  *   The length of the data to be copied
  * @param src_hdl
  *   An opaque handle for the source data, to be returned when this operation
- *   has been completed and the user polls for the completion details
+ *   has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdl
  *   An opaque handle for the destination data, to be returned when this
- *   operation has been completed and the user polls for the completion details
+ *   operation has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param fence
  *   A flag parameter indicating that hardware should not begin to perform any
  *   subsequently enqueued copy operations until after this operation has
@@ -126,8 +132,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
 	desc->src_addr = src;
 	desc->dest_addr = dst;
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
 
-	ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl, (int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
 	ioat->enqueued++;
@@ -174,19 +182,29 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /**
  * Returns details of copy operations that have been completed
  *
- * Returns to the caller the user-provided "handles" for the copy operations
- * which have been completed by the hardware, and not already returned by
- * a previous call to this API.
+ * If the hdls_disable option was not set when the device was configured,
+ * the function will return to the caller the user-provided "handles" for
+ * the copy operations which have been completed by the hardware, and not
+ * already returned by a previous call to this API.
+ * If the hdls_disable option for the device was set on configure, the
+ * max_copies, src_hdls and dst_hdls parameters will be ignored, and the
+ * function returns the number of newly-completed operations.
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  * @param max_copies
  *   The number of entries which can fit in the src_hdls and dst_hdls
- *   arrays, i.e. max number of completed operations to report
+ *   arrays, i.e. max number of completed operations to report.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies
+ *   Array to hold the source handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies
+ *   Array to hold the destination handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @return
  *   -1 on error, with rte_errno set appropriately.
  *   Otherwise number of completed operations i.e. number of entries written
@@ -212,6 +230,11 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		return -1;
 	}
 
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
 	if (count > max_copies)
 		count = max_copies;
 
@@ -229,7 +252,7 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		src_hdls[i] = hdls[0];
 		dst_hdls[i] = hdls[1];
 	}
-
+end:
 	ioat->next_read = read;
 	ioat->completed += count;
 	return count;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 06/25] raw/ioat: split header for readability
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (4 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
                     ` (18 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Rather than having a single long, complicated header file for general use,
we can split things so that there is one header with all the publicly
needed information - data structs and function prototypes - while the rest
of the internal details are kept separate. This makes the APIs easier to
read, understand and use.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---

Some versions of checkpatch show errors about spacing in this patch;
however, these appear to be false positives.
---
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev.h     | 147 +---------------------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 168 +++++++++++++++++++++++++
 3 files changed, 175 insertions(+), 141 deletions(-)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 0878418ae..f66e9b605 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,4 +8,5 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
+		'rte_ioat_rawdev_fns.h',
 		'rte_ioat_spec.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 28ce95cc9..7067b352f 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -18,12 +18,7 @@ extern "C" {
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
-#include <x86intrin.h>
-#include <rte_atomic.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+#include <rte_common.h>
 
 /** Name of the device driver */
 #define IOAT_PMD_RAWDEV_NAME rawdev_ioat
@@ -42,38 +37,6 @@ struct rte_ioat_rawdev_config {
 	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
-/**
- * @internal
- * Structure representing a device instance
- */
-struct rte_ioat_rawdev {
-	struct rte_rawdev *rawdev;
-	const struct rte_memzone *mz;
-	const struct rte_memzone *desc_mz;
-
-	volatile struct rte_ioat_registers *regs;
-	phys_addr_t status_addr;
-	phys_addr_t ring_addr;
-
-	unsigned short ring_size;
-	struct rte_ioat_generic_hw_desc *desc_ring;
-	bool hdls_disable;
-	__m128i *hdls; /* completion handles for returning to user */
-
-
-	unsigned short next_read;
-	unsigned short next_write;
-
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
-	/* to report completions, the device will write status back here */
-	volatile uint64_t status __rte_cache_aligned;
-};
-
 /**
  * Enqueue a copy operation onto the ioat device
  *
@@ -108,39 +71,7 @@ struct rte_ioat_rawdev {
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	unsigned short read = ioat->next_read;
-	unsigned short write = ioat->next_write;
-	unsigned short mask = ioat->ring_size - 1;
-	unsigned short space = mask + read - write;
-	struct rte_ioat_generic_hw_desc *desc;
-
-	if (space == 0) {
-		ioat->enqueue_failed++;
-		return 0;
-	}
-
-	ioat->next_write = write + 1;
-	write &= mask;
-
-	desc = &ioat->desc_ring[write];
-	desc->size = length;
-	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
-	desc->src_addr = src;
-	desc->dest_addr = dst;
-	if (!ioat->hdls_disable)
-		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
-					(int64_t)src_hdl);
-
-	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
-
-	ioat->enqueued++;
-	return 1;
-}
+		int fence);
 
 /**
  * Trigger hardware to begin performing enqueued copy operations
@@ -152,32 +83,7 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
-			.control.completion_update = 1;
-	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
-	ioat->started = ioat->enqueued;
-}
-
-/**
- * @internal
- * Returns the index of the last completed operation.
- */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
-{
-	uint64_t status = ioat->status;
-
-	/* lower 3 bits indicate "transfer status" : active, idle, halted.
-	 * We can ignore bit 0.
-	 */
-	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
-	return (status - ioat->ring_addr) >> 6;
-}
+rte_ioat_do_copies(int dev_id);
 
 /**
  * Returns details of copy operations that have been completed
@@ -212,51 +118,10 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
  */
 static inline int
 rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
-		uintptr_t *src_hdls, uintptr_t *dst_hdls)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	unsigned short mask = (ioat->ring_size - 1);
-	unsigned short read = ioat->next_read;
-	unsigned short end_read, count;
-	int error;
-	int i = 0;
-
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
-	count = (end_read - (read & mask)) & mask;
-
-	if (error) {
-		rte_errno = EIO;
-		return -1;
-	}
-
-	if (ioat->hdls_disable) {
-		read += count;
-		goto end;
-	}
+		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
-	if (count > max_copies)
-		count = max_copies;
-
-	for (; i < count - 1; i += 2, read += 2) {
-		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
-		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
-
-		_mm_storeu_si128((__m128i *)&src_hdls[i],
-				_mm_unpacklo_epi64(hdls0, hdls1));
-		_mm_storeu_si128((__m128i *)&dst_hdls[i],
-				_mm_unpackhi_epi64(hdls0, hdls1));
-	}
-	for (; i < count; i++, read++) {
-		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
-		src_hdls[i] = hdls[0];
-		dst_hdls[i] = hdls[1];
-	}
-end:
-	ioat->next_read = read;
-	ioat->completed += count;
-	return count;
-}
+/* include the implementation details from a separate file */
+#include "rte_ioat_rawdev_fns.h"
 
 #ifdef __cplusplus
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
new file mode 100644
index 000000000..4b7bdb8e2
--- /dev/null
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -0,0 +1,168 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Intel Corporation
+ */
+#ifndef _RTE_IOAT_RAWDEV_FNS_H_
+#define _RTE_IOAT_RAWDEV_FNS_H_
+
+#include <x86intrin.h>
+#include <rte_rawdev.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device instance
+ */
+struct rte_ioat_rawdev {
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	const struct rte_memzone *desc_mz;
+
+	volatile struct rte_ioat_registers *regs;
+	phys_addr_t status_addr;
+	phys_addr_t ring_addr;
+
+	unsigned short ring_size;
+	bool hdls_disable;
+	struct rte_ioat_generic_hw_desc *desc_ring;
+	__m128i *hdls; /* completion handles for returning to user */
+
+
+	unsigned short next_read;
+	unsigned short next_write;
+
+	/* some statistics for tracking, if added/changed update xstats fns*/
+	uint64_t enqueue_failed __rte_cache_aligned;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+
+	/* to report completions, the device will write status back here */
+	volatile uint64_t status __rte_cache_aligned;
+};
+
+/*
+ * Enqueue a copy operation onto the ioat device
+ */
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
+		int fence)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short read = ioat->next_read;
+	unsigned short write = ioat->next_write;
+	unsigned short mask = ioat->ring_size - 1;
+	unsigned short space = mask + read - write;
+	struct rte_ioat_generic_hw_desc *desc;
+
+	if (space == 0) {
+		ioat->enqueue_failed++;
+		return 0;
+	}
+
+	ioat->next_write = write + 1;
+	write &= mask;
+
+	desc = &ioat->desc_ring[write];
+	desc->size = length;
+	/* set descriptor write-back every 16th descriptor */
+	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
+	desc->src_addr = src;
+	desc->dest_addr = dst;
+
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
+	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
+
+	ioat->enqueued++;
+	return 1;
+}
+
+/*
+ * Trigger hardware to begin performing enqueued copy operations
+ */
+static inline void
+rte_ioat_do_copies(int dev_id)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
+			.control.completion_update = 1;
+	rte_compiler_barrier();
+	ioat->regs->dmacount = ioat->next_write;
+	ioat->started = ioat->enqueued;
+}
+
+/**
+ * @internal
+ * Returns the index of the last completed operation.
+ */
+static inline int
+rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+{
+	uint64_t status = ioat->status;
+
+	/* lower 3 bits indicate "transfer status" : active, idle, halted.
+	 * We can ignore bit 0.
+	 */
+	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
+	return (status - ioat->ring_addr) >> 6;
+}
+
+/*
+ * Returns details of copy operations that have been completed
+ */
+static inline int
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short mask = (ioat->ring_size - 1);
+	unsigned short read = ioat->next_read;
+	unsigned short end_read, count;
+	int error;
+	int i = 0;
+
+	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	count = (end_read - (read & mask)) & mask;
+
+	if (error) {
+		rte_errno = EIO;
+		return -1;
+	}
+
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
+	if (count > max_copies)
+		count = max_copies;
+
+	for (; i < count - 1; i += 2, read += 2) {
+		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
+		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
+
+		_mm_storeu_si128((__m128i *)&src_hdls[i],
+				_mm_unpacklo_epi64(hdls0, hdls1));
+		_mm_storeu_si128((__m128i *)&dst_hdls[i],
+				_mm_unpackhi_epi64(hdls0, hdls1));
+	}
+	for (; i < count; i++, read++) {
+		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
+		src_hdls[i] = hdls[0];
+		dst_hdls[i] = hdls[1];
+	}
+
+end:
+	ioat->next_read = read;
+	ioat->completed += count;
+	return count;
+}
+
+#endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 07/25] raw/ioat: rename functions to be operation-agnostic
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (5 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 06/25] raw/ioat: split header for readability Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 08/25] raw/ioat: add separate API for fence call Bruce Richardson
                     ` (17 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Since the hardware supported by the ioat driver is capable of operations
other than just copies, we can rename the doorbell and completion-return
functions to not have "copies" in their names. These functions are not
copy-specific, and so would apply for other operations which may be added
later to the driver.

Also add a suitable warning, using the deprecation attribute, for any code
still using the old function names.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>

---
Note: The checkpatches warning on this patch is a false positive due to
the addition of the new __rte_deprecated_msg macro in rte_common.h
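
A brief migration sketch for existing users (MAX_BURST and the wrapper
function are illustrative): the old names still compile, but from this
release they emit a deprecation warning pointing at the replacements.

    #include <stdint.h>
    #include <rte_ioat_rawdev.h>

    #define MAX_BURST 32 /* illustrative burst size */

    static int
    flush_and_drain(int dev_id)
    {
            uintptr_t src_hdls[MAX_BURST], dst_hdls[MAX_BURST];

            /* previously:
             *   rte_ioat_do_copies(dev_id);
             *   rte_ioat_completed_copies(dev_id, MAX_BURST,
             *                   src_hdls, dst_hdls);
             */
            rte_ioat_perform_ops(dev_id);
            return rte_ioat_completed_ops(dev_id, MAX_BURST,
                            src_hdls, dst_hdls);
    }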
---
 doc/guides/rawdevs/ioat.rst            | 16 ++++++++--------
 doc/guides/rel_notes/release_20_11.rst |  9 +++++++++
 doc/guides/sample_app_ug/ioat.rst      |  8 ++++----
 drivers/raw/ioat/ioat_rawdev_test.c    | 12 ++++++------
 drivers/raw/ioat/rte_ioat_rawdev.h     | 14 +++++++-------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 20 ++++++++++++++++----
 examples/ioat/ioatfwd.c                |  4 ++--
 lib/librte_eal/include/rte_common.h    |  1 +
 8 files changed, 53 insertions(+), 31 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index af00d77fb..3db5f5d09 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -157,9 +157,9 @@ Performing Data Copies
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 To perform data copies using IOAT rawdev devices, the functions
-``rte_ioat_enqueue_copy()`` and ``rte_ioat_do_copies()`` should be used.
+``rte_ioat_enqueue_copy()`` and ``rte_ioat_perform_ops()`` should be used.
 Once copies have been completed, the completion will be reported back when
-the application calls ``rte_ioat_completed_copies()``.
+the application calls ``rte_ioat_completed_ops()``.
 
 The ``rte_ioat_enqueue_copy()`` function enqueues a single copy to the
 device ring for copying at a later point. The parameters to that function
@@ -172,11 +172,11 @@ pointers if packet data is being copied.
 
 While the ``rte_ioat_enqueue_copy()`` function enqueues a copy operation on
 the device ring, the copy will not actually be performed until after the
-application calls the ``rte_ioat_do_copies()`` function. This function
+application calls the ``rte_ioat_perform_ops()`` function. This function
 informs the device hardware of the elements enqueued on the ring, and the
 device will begin to process them. It is expected that, for efficiency
 reasons, a burst of operations will be enqueued to the device via multiple
-enqueue calls between calls to the ``rte_ioat_do_copies()`` function.
+enqueue calls between calls to the ``rte_ioat_perform_ops()`` function.
 
 The following code from ``test_ioat_rawdev.c`` demonstrates how to enqueue
 a burst of copies to the device and start the hardware processing of them:
@@ -210,10 +210,10 @@ a burst of copies to the device and start the hardware processing of them:
                         return -1;
                 }
         }
-        rte_ioat_do_copies(dev_id);
+        rte_ioat_perform_ops(dev_id);
 
 To retrieve information about completed copies, the API
-``rte_ioat_completed_copies()`` should be used. This API will return to the
+``rte_ioat_completed_ops()`` should be used. This API will return to the
 application a set of completion handles passed in when the relevant copies
 were enqueued.
 
@@ -223,9 +223,9 @@ is correct before freeing the data buffers using the returned handles:
 
 .. code-block:: C
 
-        if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+        if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
                         (void *)completed_dst) != RTE_DIM(srcs)) {
-                printf("Error with rte_ioat_completed_copies\n");
+                printf("Error with rte_ioat_completed_ops\n");
                 return -1;
         }
         for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 196209f63..c99c0b33f 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -83,6 +83,11 @@ New Features
   The ioat rawdev driver has been updated and enhanced. Changes include:
 
   * Added a per-device configuration flag to disable management of user-provided completion handles
+  * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
+    and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
+    to better reflect the APIs' purposes, and remove the implication that
+    they are limited to copy operations only.
+    [Note: The old API is still provided but marked as deprecated in the code]
 
 
 Removed Items
@@ -178,6 +183,10 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* raw/ioat: As noted above, the ``rte_ioat_do_copies()`` and
+  ``rte_ioat_completed_copies()`` functions have been renamed to
+  ``rte_ioat_perform_ops()`` and ``rte_ioat_completed_ops()`` respectively.
+
 
 ABI Changes
 -----------
diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst
index 3f7d5c34a..964160dff 100644
--- a/doc/guides/sample_app_ug/ioat.rst
+++ b/doc/guides/sample_app_ug/ioat.rst
@@ -394,7 +394,7 @@ packet using ``pktmbuf_sw_copy()`` function and enqueue them to an rte_ring:
                 nb_enq = ioat_enqueue_packets(pkts_burst,
                     nb_rx, rx_config->ioat_ids[i]);
                 if (nb_enq > 0)
-                    rte_ioat_do_copies(rx_config->ioat_ids[i]);
+                    rte_ioat_perform_ops(rx_config->ioat_ids[i]);
             } else {
                 /* Perform packet software copy, free source packets */
                 int ret;
@@ -433,7 +433,7 @@ The packets are received in burst mode using ``rte_eth_rx_burst()``
 function. When using hardware copy mode the packets are enqueued in
 copying device's buffer using ``ioat_enqueue_packets()`` which calls
 ``rte_ioat_enqueue_copy()``. When all received packets are in the
-buffer the copy operations are started by calling ``rte_ioat_do_copies()``.
+buffer the copy operations are started by calling ``rte_ioat_perform_ops()``.
 Function ``rte_ioat_enqueue_copy()`` operates on physical address of
 the packet. Structure ``rte_mbuf`` contains only physical address to
 start of the data buffer (``buf_iova``). Thus the address is adjusted
@@ -490,7 +490,7 @@ or indirect mbufs, then multiple copy operations must be used.
 
 
 All completed copies are processed by ``ioat_tx_port()`` function. When using
-hardware copy mode the function invokes ``rte_ioat_completed_copies()``
+hardware copy mode the function invokes ``rte_ioat_completed_ops()``
 on each assigned IOAT channel to gather copied packets. If software copy
 mode is used the function dequeues copied packets from the rte_ring. Then each
 packet MAC address is changed if it was enabled. After that copies are sent
@@ -510,7 +510,7 @@ in burst mode using `` rte_eth_tx_burst()``.
         for (i = 0; i < tx_config->nb_queues; i++) {
             if (copy_mode == COPY_MODE_IOAT_NUM) {
                 /* Deque the mbufs from IOAT device. */
-                nb_dq = rte_ioat_completed_copies(
+                nb_dq = rte_ioat_completed_ops(
                     tx_config->ioat_ids[i], MAX_PKT_BURST,
                     (void *)mbufs_src, (void *)mbufs_dst);
             } else {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 8e7fd96af..bb40eab6b 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -62,12 +62,12 @@ test_enqueue_copies(int dev_id)
 			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(10);
 
-		if (rte_ioat_completed_copies(dev_id, 1, (void *)&completed[0],
+		if (rte_ioat_completed_ops(dev_id, 1, (void *)&completed[0],
 				(void *)&completed[1]) != 1) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		if (completed[0] != src || completed[1] != dst) {
@@ -116,12 +116,12 @@ test_enqueue_copies(int dev_id)
 				return -1;
 			}
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(100);
 
-		if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+		if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
 				(void *)completed_dst) != RTE_DIM(srcs)) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 7067b352f..5b2c47e8c 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -74,19 +74,19 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		int fence);
 
 /**
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  *
  * This API is used to write the "doorbell" to the hardware to trigger it
- * to begin the copy operations previously enqueued by rte_ioat_enqueue_copy()
+ * to begin the operations previously enqueued by rte_ioat_enqueue_copy()
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id);
+rte_ioat_perform_ops(int dev_id);
 
 /**
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  *
  * If the hdls_disable option was not set when the device was configured,
  * the function will return to the caller the user-provided "handles" for
@@ -104,11 +104,11 @@ rte_ioat_do_copies(int dev_id);
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies.
+ *   Array to hold the source handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies.
+ *   Array to hold the destination handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @return
@@ -117,7 +117,7 @@ rte_ioat_do_copies(int dev_id);
  *   to the src_hdls and dst_hdls array parameters.
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
 /* include the implementation details from a separate file */
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 4b7bdb8e2..b155d79c4 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -83,10 +83,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 }
 
 /*
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
+rte_ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -114,10 +114,10 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 }
 
 /*
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -165,4 +165,16 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static inline void
+__rte_deprecated_msg("use rte_ioat_perform_ops() instead")
+rte_ioat_do_copies(int dev_id) { rte_ioat_perform_ops(dev_id); }
+
+static inline int
+__rte_deprecated_msg("use rte_ioat_completed_ops() instead")
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	return rte_ioat_completed_ops(dev_id, max_copies, src_hdls, dst_hdls);
+}
+
 #endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 288a75c7b..67f75737b 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -406,7 +406,7 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
 			nb_enq = ioat_enqueue_packets(pkts_burst,
 				nb_rx, rx_config->ioat_ids[i]);
 			if (nb_enq > 0)
-				rte_ioat_do_copies(rx_config->ioat_ids[i]);
+				rte_ioat_perform_ops(rx_config->ioat_ids[i]);
 		} else {
 			/* Perform packet software copy, free source packets */
 			int ret;
@@ -452,7 +452,7 @@ ioat_tx_port(struct rxtx_port_config *tx_config)
 	for (i = 0; i < tx_config->nb_queues; i++) {
 		if (copy_mode == COPY_MODE_IOAT_NUM) {
 			/* Deque the mbufs from IOAT device. */
-			nb_dq = rte_ioat_completed_copies(
+			nb_dq = rte_ioat_completed_ops(
 				tx_config->ioat_ids[i], MAX_PKT_BURST,
 				(void *)mbufs_src, (void *)mbufs_dst);
 		} else {
diff --git a/lib/librte_eal/include/rte_common.h b/lib/librte_eal/include/rte_common.h
index 8f487a563..2920255fc 100644
--- a/lib/librte_eal/include/rte_common.h
+++ b/lib/librte_eal/include/rte_common.h
@@ -85,6 +85,7 @@ typedef uint16_t unaligned_uint16_t;
 
 /******* Macro to mark functions and fields scheduled for removal *****/
 #define __rte_deprecated	__attribute__((__deprecated__))
+#define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
 
 /**
  * Mark a function or variable to a weak reference.
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 08/25] raw/ioat: add separate API for fence call
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (6 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 09/25] raw/ioat: make the HW register spec private Bruce Richardson
                     ` (16 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Rather than having the fence signalled via a flag on a descriptor - which
requires reading the docs to find out whether the flag needs to go on the
last descriptor before, or the first descriptor after the fence - we can
instead add a separate fence API call. This becomes unambiguous to use,
since the fence call explicitly comes between two other enqueue calls. It
also allows more freedom of implementation in the driver code.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
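
A short usage sketch under the new scheme (the three-buffer copy chain is
hypothetical, and the handle parameters are passed as zero for brevity):
the fence simply sits between two enqueue calls, so there is no flag to
place on a particular descriptor.

    #include <stdint.h>
    #include <rte_ioat_rawdev.h>

    static int
    enqueue_ordered_pair(int dev_id, phys_addr_t src, phys_addr_t mid,
                    phys_addr_t dst, unsigned int len)
    {
            if (rte_ioat_enqueue_copy(dev_id, src, mid, len, 0, 0) != 1)
                    return -1;
            /* second copy must not start until the first has completed */
            rte_ioat_fence(dev_id);
            if (rte_ioat_enqueue_copy(dev_id, mid, dst, len, 0, 0) != 1)
                    return -1;
            rte_ioat_perform_ops(dev_id); /* ring the doorbell */
            return 0;
    }

---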
 doc/guides/rawdevs/ioat.rst            |  3 +--
 doc/guides/rel_notes/release_20_11.rst |  4 ++++
 drivers/raw/ioat/ioat_rawdev_test.c    |  6 ++----
 drivers/raw/ioat/rte_ioat_rawdev.h     | 26 ++++++++++++++++++++------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 22 +++++++++++++++++++---
 examples/ioat/ioatfwd.c                | 12 ++++--------
 6 files changed, 50 insertions(+), 23 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 3db5f5d09..71bca0b28 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -203,8 +203,7 @@ a burst of copies to the device and start the hardware processing of them:
                                 dsts[i]->buf_iova + dsts[i]->data_off,
                                 length,
                                 (uintptr_t)srcs[i],
-                                (uintptr_t)dsts[i],
-                                0 /* nofence */) != 1) {
+                                (uintptr_t)dsts[i]) != 1) {
                         printf("Error with rte_ioat_enqueue_copy for buffer %u\n",
                                         i);
                         return -1;
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index c99c0b33f..3868529ac 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -88,6 +88,10 @@ New Features
     to better reflect the APIs' purposes, and remove the implication that
     they are limited to copy operations only.
     [Note: The old API is still provided but marked as deprecated in the code]
+  * Added a new API ``rte_ioat_fence()`` to add a fence between operations.
+    This API replaces the ``fence`` flag parameter in the ``rte_ioat_enqueue_copies()`` function,
+    and is clearer as there is no ambiguity as to whether the flag should be
+    set on the last operation before the fence or the first operation after it.
 
 
 Removed Items
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index bb40eab6b..8ff546803 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -57,8 +57,7 @@ test_enqueue_copies(int dev_id)
 				dst->buf_iova + dst->data_off,
 				length,
 				(uintptr_t)src,
-				(uintptr_t)dst,
-				0 /* no fence */) != 1) {
+				(uintptr_t)dst) != 1) {
 			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
@@ -109,8 +108,7 @@ test_enqueue_copies(int dev_id)
 					dsts[i]->buf_iova + dsts[i]->data_off,
 					length,
 					(uintptr_t)srcs[i],
-					(uintptr_t)dsts[i],
-					0 /* nofence */) != 1) {
+					(uintptr_t)dsts[i]) != 1) {
 				PRINT_ERR("Error with rte_ioat_enqueue_copy for buffer %u\n",
 						i);
 				return -1;
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 5b2c47e8c..6b891cd44 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -61,17 +61,31 @@ struct rte_ioat_rawdev_config {
  *   operation has been completed and the user polls for the completion details.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
- * @param fence
- *   A flag parameter indicating that hardware should not begin to perform any
- *   subsequently enqueued copy operations until after this operation has
- *   completed
  * @return
  *   Number of operations enqueued, either 0 or 1
  */
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
-		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence);
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl);
+
+/**
+ * Add a fence to force ordering between operations
+ *
+ * This adds a fence to a sequence of operations to enforce ordering, such that
+ * all operations enqueued before the fence must be completed before operations
+ * after the fence.
+ * NOTE: Since this fence may be added as a flag to the last operation enqueued,
+ * this API may not function correctly when called immediately after an
+ * "rte_ioat_perform_ops" call i.e. before any new operations are enqueued.
+ *
+ * @param dev_id
+ *   The rawdev device id of the ioat instance
+ * @return
+ *   Number of fences enqueued, either 0 or 1
+ */
+static inline int
+rte_ioat_fence(int dev_id);
+
 
 /**
  * Trigger hardware to begin performing enqueued operations
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index b155d79c4..466721a23 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -47,8 +47,7 @@ struct rte_ioat_rawdev {
  */
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
-		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence)
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -69,7 +68,7 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc = &ioat->desc_ring[write];
 	desc->size = length;
 	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
+	desc->u.control_raw = (uint32_t)((!(write & 0xF)) << 3);
 	desc->src_addr = src;
 	desc->dest_addr = dst;
 
@@ -82,6 +81,23 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	return 1;
 }
 
+/* add fence to last written descriptor */
+static inline int
+rte_ioat_fence(int dev_id)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short write = ioat->next_write;
+	unsigned short mask = ioat->ring_size - 1;
+	struct rte_ioat_generic_hw_desc *desc;
+
+	write = (write - 1) & mask;
+	desc = &ioat->desc_ring[write];
+
+	desc->u.control.fence = 1;
+	return 0;
+}
+
 /*
  * Trigger hardware to begin performing enqueued operations
  */
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 67f75737b..e6d1d1236 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -361,15 +361,11 @@ ioat_enqueue_packets(struct rte_mbuf **pkts,
 	for (i = 0; i < nb_rx; i++) {
 		/* Perform data copy */
 		ret = rte_ioat_enqueue_copy(dev_id,
-			pkts[i]->buf_iova
-			- addr_offset,
-			pkts_copy[i]->buf_iova
-			- addr_offset,
-			rte_pktmbuf_data_len(pkts[i])
-			+ addr_offset,
+			pkts[i]->buf_iova - addr_offset,
+			pkts_copy[i]->buf_iova - addr_offset,
+			rte_pktmbuf_data_len(pkts[i]) + addr_offset,
 			(uintptr_t)pkts[i],
-			(uintptr_t)pkts_copy[i],
-			0 /* nofence */);
+			(uintptr_t)pkts_copy[i]);
 
 		if (ret != 1)
 			break;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 09/25] raw/ioat: make the HW register spec private
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (7 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 08/25] raw/ioat: add separate API for fence call Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
                     ` (15 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Only a few definitions from the hardware spec are actually used in the
driver runtime, so we can copy over those few and make the rest of the spec
a private header in the driver.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c                |  3 ++
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} | 26 -----------
 drivers/raw/ioat/meson.build                  |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 43 +++++++++++++++++--
 4 files changed, 44 insertions(+), 31 deletions(-)
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (91%)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index ea9f51ffc..aa59b731f 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -4,10 +4,12 @@
 
 #include <rte_cycles.h>
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 #include <rte_string_fns.h>
 #include <rte_rawdev_pmd.h>
 
 #include "rte_ioat_rawdev.h"
+#include "ioat_spec.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -268,6 +270,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
+	ioat->doorbell = &ioat->regs->dmacount;
 	ioat->ring_size = 0;
 	ioat->desc_ring = NULL;
 	ioat->status_addr = ioat->mz->iova +
diff --git a/drivers/raw/ioat/rte_ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
similarity index 91%
rename from drivers/raw/ioat/rte_ioat_spec.h
rename to drivers/raw/ioat/ioat_spec.h
index c6e7929b2..9645e16d4 100644
--- a/drivers/raw/ioat/rte_ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -86,32 +86,6 @@ struct rte_ioat_registers {
 
 #define RTE_IOAT_CHANCMP_ALIGN			8	/* CHANCMP address must be 64-bit aligned */
 
-struct rte_ioat_generic_hw_desc {
-	uint32_t size;
-	union {
-		uint32_t control_raw;
-		struct {
-			uint32_t int_enable: 1;
-			uint32_t src_snoop_disable: 1;
-			uint32_t dest_snoop_disable: 1;
-			uint32_t completion_update: 1;
-			uint32_t fence: 1;
-			uint32_t reserved2: 1;
-			uint32_t src_page_break: 1;
-			uint32_t dest_page_break: 1;
-			uint32_t bundle: 1;
-			uint32_t dest_dca: 1;
-			uint32_t hint: 1;
-			uint32_t reserved: 13;
-			uint32_t op: 8;
-		} control;
-	} u;
-	uint64_t src_addr;
-	uint64_t dest_addr;
-	uint64_t next;
-	uint64_t op_specific[4];
-};
-
 struct rte_ioat_dma_hw_desc {
 	uint32_t size;
 	union {
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index f66e9b605..06636f8a9 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,5 +8,4 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-		'rte_ioat_rawdev_fns.h',
-		'rte_ioat_spec.h')
+		'rte_ioat_rawdev_fns.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 466721a23..c6e0b9a58 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -8,7 +8,36 @@
 #include <rte_rawdev.h>
 #include <rte_memzone.h>
 #include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device descriptor
+ */
+struct rte_ioat_generic_hw_desc {
+	uint32_t size;
+	union {
+		uint32_t control_raw;
+		struct {
+			uint32_t int_enable: 1;
+			uint32_t src_snoop_disable: 1;
+			uint32_t dest_snoop_disable: 1;
+			uint32_t completion_update: 1;
+			uint32_t fence: 1;
+			uint32_t reserved2: 1;
+			uint32_t src_page_break: 1;
+			uint32_t dest_page_break: 1;
+			uint32_t bundle: 1;
+			uint32_t dest_dca: 1;
+			uint32_t hint: 1;
+			uint32_t reserved: 13;
+			uint32_t op: 8;
+		} control;
+	} u;
+	uint64_t src_addr;
+	uint64_t dest_addr;
+	uint64_t next;
+	uint64_t op_specific[4];
+};
 
 /**
  * @internal
@@ -19,7 +48,7 @@ struct rte_ioat_rawdev {
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile struct rte_ioat_registers *regs;
+	volatile uint16_t *doorbell;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -40,8 +69,16 @@ struct rte_ioat_rawdev {
 
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
+
+	/* pointer to the register bar */
+	volatile struct rte_ioat_registers *regs;
 };
 
+#define RTE_IOAT_CHANSTS_IDLE			0x1
+#define RTE_IOAT_CHANSTS_SUSPENDED		0x2
+#define RTE_IOAT_CHANSTS_HALTED			0x3
+#define RTE_IOAT_CHANSTS_ARMED			0x4
+
 /*
  * Enqueue a copy operation onto the ioat device
  */
@@ -109,7 +146,7 @@ rte_ioat_perform_ops(int dev_id)
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
 			.control.completion_update = 1;
 	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
+	*ioat->doorbell = ioat->next_write;
 	ioat->started = ioat->enqueued;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 10/25] usertools/dpdk-devbind.py: add support for DSA HW
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (8 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 09/25] raw/ioat: make the HW register spec private Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
                     ` (14 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Kevin Laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Intel Data Streaming Accelerator (Intel DSA) is a high-performance data
copy and transformation accelerator which will be integrated in future
Intel processors [1].

Add DSA device support to dpdk-devbind.py script.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst | 2 ++
 usertools/dpdk-devbind.py              | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 3868529ac..4d8b78154 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -82,6 +82,8 @@ New Features
 
   The ioat rawdev driver has been updated and enhanced. Changes include:
 
+  * Added support for Intel\ |reg| Data Streaming Accelerator hardware.
+    For more information, see https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
   * Added a per-device configuration flag to disable management of user-provided completion handles
   * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
     and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index 094c2ffc8..f2916bef5 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -53,6 +53,8 @@
               'SVendor': None, 'SDevice': None}
 intel_ioat_icx = {'Class': '08', 'Vendor': '8086', 'Device': '0b00',
               'SVendor': None, 'SDevice': None}
+intel_idxd_spr = {'Class': '08', 'Vendor': '8086', 'Device': '0b25',
+              'SVendor': None, 'SDevice': None}
 intel_ntb_skx = {'Class': '06', 'Vendor': '8086', 'Device': '201c',
               'SVendor': None, 'SDevice': None}
 
@@ -62,7 +64,7 @@
 eventdev_devices = [cavium_sso, cavium_tim, octeontx2_sso]
 mempool_devices = [cavium_fpa, octeontx2_npa]
 compress_devices = [cavium_zip]
-misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_ntb_skx, octeontx2_dma]
+misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_idxd_spr, intel_ntb_skx, octeontx2_dma]
 
 # global dict ethernet devices present. Dictionary indexed by PCI address.
 # Each device within this is itself a dictionary of device properties
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (9 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
                     ` (13 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add in the basic probe/remove skeleton code for DSA devices which are bound
directly to a vfio or uio driver. The kernel module supporting these devices
uses the "idxd" name, so that name is used as the function and file prefix
to avoid conflict with the existing "ioat"-prefixed functions.

Since we are adding new files to the driver and there will be common
definitions shared between the various files, we create a new internal
header file ioat_private.h to hold common macros and function prototypes.
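
As a rough illustration only (not part of the patch itself), the shared
logging macros introduced in ioat_private.h below expand as follows, so
that both the "ioat" and "idxd" files log under the single
ioat_pmd_logtype:

    /* Illustration: with the IOAT_PMD_LOG/IOAT_PMD_INFO definitions from
     * ioat_private.h in the diff below, a call such as
     *
     *     IOAT_PMD_INFO("Init %s on NUMA node %d", name, numa_node);
     *
     * expands to approximately
     *
     *     rte_log(RTE_LOG_INFO, ioat_pmd_logtype,
     *             "%s(): Init %s on NUMA node %d\n", __func__, name, numa_node);
     */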

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 doc/guides/rawdevs/ioat.rst     | 69 ++++++++++-----------------------
 drivers/raw/ioat/idxd_pci.c     | 56 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 27 +++++++++++++
 drivers/raw/ioat/ioat_rawdev.c  |  9 +----
 drivers/raw/ioat/meson.build    |  6 ++-
 5 files changed, 108 insertions(+), 59 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/ioat_private.h

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 71bca0b28..b898f98d5 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -3,10 +3,12 @@
 
 .. include:: <isonum.txt>
 
-IOAT Rawdev Driver for Intel\ |reg| QuickData Technology
-======================================================================
+IOAT Rawdev Driver
+===================
 
 The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg|
+Data Streaming Accelerator `(Intel DSA)
+<https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator>`_ and for Intel\ |reg|
 QuickData Technology, part of Intel\ |reg| I/O Acceleration Technology
 `(Intel I/OAT)
 <https://www.intel.com/content/www/us/en/wireless-network/accel-technology.html>`_.
@@ -17,61 +19,30 @@ be done by software, freeing up CPU cycles for other tasks.
 Hardware Requirements
 ----------------------
 
-On Linux, the presence of an Intel\ |reg| QuickData Technology hardware can
-be detected by checking the output of the ``lspci`` command, where the
-hardware will be often listed as "Crystal Beach DMA" or "CBDMA". For
-example, on a system with Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @2.20GHz,
-lspci shows:
-
-.. code-block:: console
-
-  # lspci | grep DMA
-  00:04.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 0 (rev 01)
-  00:04.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 1 (rev 01)
-  00:04.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 2 (rev 01)
-  00:04.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 3 (rev 01)
-  00:04.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 4 (rev 01)
-  00:04.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 5 (rev 01)
-  00:04.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 6 (rev 01)
-  00:04.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 7 (rev 01)
-
-On a system with Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz, lspci
-shows:
-
-.. code-block:: console
-
-  # lspci | grep DMA
-  00:04.0 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.1 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.2 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.3 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.4 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.5 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.6 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.7 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-
+The ``dpdk-devbind.py`` script, included with DPDK,
+can be used to show the presence of supported hardware.
+Running ``dpdk-devbind.py --status-dev misc`` will show all the miscellaneous,
+or rawdev-based, devices on the system.
+For Intel\ |reg| QuickData Technology devices, the hardware will often be listed as "Crystal Beach DMA"
+or "CBDMA".
+Intel\ |reg| DSA devices currently (at time of writing) appear as devices with type "0b25",
+due to the absence of pci-id database entries for them at this point.
 
 Compilation
 ------------
 
-For builds done with ``make``, the driver compilation is enabled by the
-``CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV`` build configuration option. This is
-enabled by default in builds for x86 platforms, and disabled in other
-configurations.
-
-For builds using ``meson`` and ``ninja``, the driver will be built when the
-target platform is x86-based.
+For builds using ``meson`` and ``ninja``, the driver will be built when the target platform is x86-based.
+No additional compilation steps are necessary.
 
 Device Setup
 -------------
 
-The Intel\ |reg| QuickData Technology HW devices will need to be bound to a
-user-space IO driver for use. The script ``dpdk-devbind.py`` script
-included with DPDK can be used to view the state of the devices and to bind
-them to a suitable DPDK-supported kernel driver. When querying the status
-of the devices, they will appear under the category of "Misc (rawdev)
-devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
-used to see the state of those devices alone.
+The HW devices to be used will need to be bound to a user-space IO driver for use.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported kernel driver, such as ``vfio-pci``.
+For example::
+
+	$ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1
 
 Device Probing and Initialization
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
new file mode 100644
index 000000000..1a30e9c31
--- /dev/null
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_pci.h>
+
+#include "ioat_private.h"
+
+#define IDXD_VENDOR_ID		0x8086
+#define IDXD_DEVICE_ID_SPR	0x0B25
+
+#define IDXD_PMD_RAWDEV_NAME_PCI rawdev_idxd_pci
+
+const struct rte_pci_id pci_id_idxd_map[] = {
+	{ RTE_PCI_DEVICE(IDXD_VENDOR_ID, IDXD_DEVICE_ID_SPR) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+{
+	int ret = 0;
+	char name[PCI_PRI_STR_SIZE];
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
+	dev->device.driver = &drv->driver;
+
+	return ret;
+}
+
+static int
+idxd_rawdev_remove_pci(struct rte_pci_device *dev)
+{
+	char name[PCI_PRI_STR_SIZE];
+	int ret = 0;
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+
+	IOAT_PMD_INFO("Closing %s on NUMA node %d",
+			name, dev->device.numa_node);
+
+	return ret;
+}
+
+struct rte_pci_driver idxd_pmd_drv_pci = {
+	.id_table = pci_id_idxd_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = idxd_rawdev_probe_pci,
+	.remove = idxd_rawdev_remove_pci,
+};
+
+RTE_PMD_REGISTER_PCI(IDXD_PMD_RAWDEV_NAME_PCI, idxd_pmd_drv_pci);
+RTE_PMD_REGISTER_PCI_TABLE(IDXD_PMD_RAWDEV_NAME_PCI, pci_id_idxd_map);
+RTE_PMD_REGISTER_KMOD_DEP(IDXD_PMD_RAWDEV_NAME_PCI,
+			  "* igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
new file mode 100644
index 000000000..d87d4d055
--- /dev/null
+++ b/drivers/raw/ioat/ioat_private.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _IOAT_PRIVATE_H_
+#define _IOAT_PRIVATE_H_
+
+/**
+ * @file ioat_private.h
+ *
+ * Private data structures for the idxd/DSA part of the ioat device driver
+ *
+ * @warning
+ * @b EXPERIMENTAL: these structures and APIs may change without prior notice
+ */
+
+extern int ioat_pmd_logtype;
+
+#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
+		ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
+
+#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
+#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
+#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
+#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
+
+#endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index aa59b731f..1fe32278d 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -10,6 +10,7 @@
 
 #include "rte_ioat_rawdev.h"
 #include "ioat_spec.h"
+#include "ioat_private.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -29,14 +30,6 @@ static struct rte_pci_driver ioat_pmd_drv;
 
 RTE_LOG_REGISTER(ioat_pmd_logtype, rawdev.ioat, INFO);
 
-#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
-	ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
-
-#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
-#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
-#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
-#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
-
 #define DESC_SZ sizeof(struct rte_ioat_generic_hw_desc)
 #define COMPLETION_SZ sizeof(__m128i)
 
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 06636f8a9..3529635e9 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -3,8 +3,10 @@
 
 build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
-sources = files('ioat_rawdev.c',
-		'ioat_rawdev_test.c')
+sources = files(
+	'idxd_pci.c',
+	'ioat_rawdev.c',
+	'ioat_rawdev_test.c')
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 12/25] raw/ioat: add vdev probe for DSA/idxd devices
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (10 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 13/25] raw/ioat: include example configuration script Bruce Richardson
                     ` (12 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Kevin Laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

The Intel DSA devices can be exposed to userspace via the kernel driver, so
they can be used without having to bind them to vfio/uio. Therefore we add
support for using those kernel-configured devices as vdevs, taking as a
parameter the individual HW work queue to be used by the vdev.
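
For illustration only (nothing in the diff below depends on this), and in
addition to the EAL --vdev argument shown in the documentation update, an
application could create such a device at runtime through the standard vdev
bus API, using the "wq=<device>.<queue>" argument format added here. The
helper name below is hypothetical:

    #include <rte_bus_vdev.h>

    /* Hypothetical example: create a rawdev for DSA instance 0, work queue 0,
     * via the "wq" devargs this patch introduces.
     */
    static int
    create_idxd_vdev_example(void)
    {
            return rte_vdev_init("rawdev_idxd", "wq=0.0");
    }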

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/rawdevs/ioat.rst  |  68 +++++++++++++++++--
 drivers/raw/ioat/idxd_vdev.c | 123 +++++++++++++++++++++++++++++++++++
 drivers/raw/ioat/meson.build |   6 +-
 3 files changed, 192 insertions(+), 5 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_vdev.c

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index b898f98d5..5b8d27980 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -37,9 +37,62 @@ No additional compilation steps are necessary.
 Device Setup
 -------------
 
+Depending on support provided by the PMD, HW devices can either use the kernel configured driver
+or be bound to a user-space IO driver for use.
+For example, Intel\ |reg| DSA devices can use the IDXD kernel driver or DPDK-supported drivers,
+such as ``vfio-pci``.
+
+Intel\ |reg| DSA devices using idxd kernel driver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use an Intel\ |reg| DSA device bound to the IDXD kernel driver, the device must first be configured.
+The `accel-config <https://github.com/intel/idxd-config>`_ utility library can be used for configuration.
+
+.. note::
+        The device configuration can also be done by directly interacting with the sysfs nodes.
+
+There are some mandatory configuration steps before being able to use a device with an application.
+The internal engines, which do the copies or other operations,
+and the work-queues, which are used by applications to assign work to the device,
+need to be assigned to groups, and the various other configuration options,
+such as priority or queue depth, need to be set for each queue.
+
+To assign an engine to a group::
+
+        $ accel-config config-engine dsa0/engine0.0 --group-id=0
+        $ accel-config config-engine dsa0/engine0.1 --group-id=1
+
+To assign work queues to groups for passing descriptors to the engines, a similar accel-config command can be used.
+However, the work queues also need to be configured depending on the use-case.
+Some configuration options include:
+
+* mode (Dedicated/Shared): Indicates whether a WQ may accept jobs from multiple queues simultaneously.
+* priority: WQ priority between 1 and 15. Larger value means higher priority.
+* wq-size: the size of the WQ. Sum of all WQ sizes must be less than the total-size defined by the device.
+* type: WQ type (kernel/mdev/user). Determines how the device is presented.
+* name: identifier given to the WQ.
+
+Example configuration for a work queue::
+
+        $ accel-config config-wq dsa0/wq0.0 --group-id=0 \
+           --mode=dedicated --priority=10 --wq-size=8 \
+           --type=user --name=app1
+
+Once the devices have been configured, they need to be enabled::
+
+        $ accel-config enable-device dsa0
+        $ accel-config enable-wq dsa0/wq0.0
+
+Check the device configuration::
+
+        $ accel-config list
+
+Devices using VFIO/UIO drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
 The HW devices to be used will need to be bound to a user-space IO driver for use.
 The ``dpdk-devbind.py`` script can be used to view the state of the devices
-and to bind them to a suitable DPDK-supported kernel driver, such as ``vfio-pci``.
+and to bind them to a suitable DPDK-supported driver, such as ``vfio-pci``.
 For example::
 
 	$ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1
@@ -47,9 +100,16 @@ For example::
 Device Probing and Initialization
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Once bound to a suitable kernel device driver, the HW devices will be found
-as part of the PCI scan done at application initialization time. No vdev
-parameters need to be passed to create or initialize the device.
+Devices bound to a suitable DPDK-supported VFIO/UIO driver will be found
+as part of the device scan done at application initialization time,
+without the need to pass parameters to the application.
+
+If the device is bound to the IDXD kernel driver (and previously configured with sysfs),
+then a specific work queue needs to be passed to the application via a vdev parameter.
+This vdev parameter takes the driver name and work queue name as parameters.
+For example, to use work queue 0 on Intel\ |reg| DSA instance 0::
+
+        $ dpdk-test --no-pci --vdev=rawdev_idxd,wq=0.0
 
 Once probed successfully, the device will appear as a ``rawdev``, that is a
 "raw device type" inside DPDK, and can be accessed using APIs from the
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
new file mode 100644
index 000000000..0509fc084
--- /dev/null
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_vdev.h>
+#include <rte_kvargs.h>
+#include <rte_string_fns.h>
+#include <rte_rawdev_pmd.h>
+
+#include "ioat_private.h"
+
+/** Name of the device driver */
+#define IDXD_PMD_RAWDEV_NAME rawdev_idxd
+/* takes a work queue(WQ) as parameter */
+#define IDXD_ARG_WQ		"wq"
+
+static const char * const valid_args[] = {
+	IDXD_ARG_WQ,
+	NULL
+};
+
+struct idxd_vdev_args {
+	uint8_t device_id;
+	uint8_t wq_id;
+};
+
+static int
+idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
+			  void *extra_args)
+{
+	struct idxd_vdev_args *args = (struct idxd_vdev_args *)extra_args;
+	int dev, wq, bytes = -1;
+	int read = sscanf(value, "%d.%d%n", &dev, &wq, &bytes);
+
+	if (read != 2 || bytes != (int)strlen(value)) {
+		IOAT_PMD_ERR("Error parsing work-queue id. Must be in <dev_id>.<queue_id> format");
+		return -EINVAL;
+	}
+
+	if (dev >= UINT8_MAX || wq >= UINT8_MAX) {
+		IOAT_PMD_ERR("Device or work queue id out of range");
+		return -EINVAL;
+	}
+
+	args->device_id = dev;
+	args->wq_id = wq;
+
+	return 0;
+}
+
+static int
+idxd_vdev_parse_params(struct rte_kvargs *kvlist, struct idxd_vdev_args *args)
+{
+	if (rte_kvargs_count(kvlist, IDXD_ARG_WQ) == 1) {
+		if (rte_kvargs_process(kvlist, IDXD_ARG_WQ,
+				&idxd_rawdev_parse_wq, args) < 0) {
+			IOAT_PMD_ERR("Error parsing %s", IDXD_ARG_WQ);
+			goto free;
+		}
+	} else {
+		IOAT_PMD_ERR("%s is a mandatory arg", IDXD_ARG_WQ);
+		return -EINVAL;
+	}
+
+	return 0;
+
+free:
+	if (kvlist)
+		rte_kvargs_free(kvlist);
+	return -EINVAL;
+}
+
+static int
+idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
+{
+	struct rte_kvargs *kvlist;
+	struct idxd_vdev_args vdev_args;
+	const char *name;
+	int ret = 0;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Initializing pmd_idxd for %s", name);
+
+	kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev), valid_args);
+	if (kvlist == NULL) {
+		IOAT_PMD_ERR("Invalid kvargs key");
+		return -EINVAL;
+	}
+
+	ret = idxd_vdev_parse_params(kvlist, &vdev_args);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to parse kvargs");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
+{
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Remove DSA vdev %p", name);
+
+	return 0;
+}
+
+struct rte_vdev_driver idxd_rawdev_drv_vdev = {
+	.probe = idxd_rawdev_probe_vdev,
+	.remove = idxd_rawdev_remove_vdev,
+};
+
+RTE_PMD_REGISTER_VDEV(IDXD_PMD_RAWDEV_NAME, idxd_rawdev_drv_vdev);
+RTE_PMD_REGISTER_PARAM_STRING(IDXD_PMD_RAWDEV_NAME,
+			      "wq=<string>");
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 3529635e9..b343b7367 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -5,9 +5,13 @@ build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
+	'idxd_vdev.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
-deps += ['rawdev', 'bus_pci', 'mbuf']
+deps += ['bus_pci',
+	'bus_vdev',
+	'mbuf',
+	'rawdev']
 
 install_headers('rte_ioat_rawdev.h',
 		'rte_ioat_rawdev_fns.h')
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 13/25] raw/ioat: include example configuration script
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (11 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
                     ` (11 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Devices managed by the idxd kernel driver must be configured for DPDK use
before they can be used by the ioat driver. This example script serves both
as a quick way to get the driver set up with a simple configuration, and as
a basis for users to modify and create their own configuration scripts.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 doc/guides/rawdevs/ioat.rst       |  2 +
 drivers/raw/ioat/dpdk_idxd_cfg.py | 79 +++++++++++++++++++++++++++++++
 2 files changed, 81 insertions(+)
 create mode 100755 drivers/raw/ioat/dpdk_idxd_cfg.py

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 5b8d27980..7c2a2d457 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -50,6 +50,8 @@ The `accel-config <https://github.com/intel/idxd-config>`_ utility library can b
 
 .. note::
         The device configuration can also be done by directly interacting with the sysfs nodes.
+        An example of how this may be done can be seen in the script ``dpdk_idxd_cfg.py``
+        included in the driver source directory.
 
 There are some mandatory configuration steps before being able to use a device with an application.
 The internal engines, which do the copies or other operations,
diff --git a/drivers/raw/ioat/dpdk_idxd_cfg.py b/drivers/raw/ioat/dpdk_idxd_cfg.py
new file mode 100755
index 000000000..bce4bb5bd
--- /dev/null
+++ b/drivers/raw/ioat/dpdk_idxd_cfg.py
@@ -0,0 +1,79 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+"""
+Configure an entire Intel DSA instance, using idxd kernel driver, for DPDK use
+"""
+
+import sys
+import argparse
+import os
+import os.path
+
+
+class SysfsDir:
+    "Used to read/write paths in a sysfs directory"
+    def __init__(self, path):
+        self.path = path
+
+    def read_int(self, filename):
+        "Return a value from sysfs file"
+        with open(os.path.join(self.path, filename)) as f:
+            return int(f.readline())
+
+    def write_values(self, values):
+        "write dictionary, where key is filename and value is value to write"
+        for filename, contents in values.items():
+            with open(os.path.join(self.path, filename), "w") as f:
+                f.write(str(contents))
+
+
+def configure_dsa(dsa_id, queues):
+    "Configure the DSA instance with appropriate number of queues"
+    dsa_dir = SysfsDir(f"/sys/bus/dsa/devices/dsa{dsa_id}")
+    drv_dir = SysfsDir("/sys/bus/dsa/drivers/dsa")
+
+    max_groups = dsa_dir.read_int("max_groups")
+    max_engines = dsa_dir.read_int("max_engines")
+    max_queues = dsa_dir.read_int("max_work_queues")
+    max_tokens = dsa_dir.read_int("max_tokens")
+
+    # we want one engine per group
+    nb_groups = min(max_engines, max_groups)
+    for grp in range(nb_groups):
+        dsa_dir.write_values({f"engine{dsa_id}.{grp}/group_id": grp})
+
+    nb_queues = min(queues, max_queues)
+    if queues > nb_queues:
+        print(f"Setting number of queues to max supported value: {max_queues}")
+
+    # configure each queue
+    for q in range(nb_queues):
+        wq_dir = SysfsDir(os.path.join(dsa_dir.path, f"wq{dsa_id}.{q}"))
+        wq_dir.write_values({"group_id": q % nb_groups,
+                             "type": "user",
+                             "mode": "dedicated",
+                             "name": f"dpdk_wq{dsa_id}.{q}",
+                             "priority": 1,
+                             "size": int(max_tokens / nb_queues)})
+
+    # enable device and then queues
+    drv_dir.write_values({"bind": f"dsa{dsa_id}"})
+    for q in range(nb_queues):
+        drv_dir.write_values({"bind": f"wq{dsa_id}.{q}"})
+
+
+def main(args):
+    "Main function, does arg parsing and calls config function"
+    arg_p = argparse.ArgumentParser(
+        description="Configure whole DSA device instance for DPDK use")
+    arg_p.add_argument('dsa_id', type=int, help="DSA instance number")
+    arg_p.add_argument('-q', metavar='queues', type=int, default=255,
+                       help="Number of queues to set up")
+    parsed_args = arg_p.parse_args(args[1:])
+    configure_dsa(parsed_args.dsa_id, parsed_args.q)
+
+
+if __name__ == "__main__":
+    main(sys.argv)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 14/25] raw/ioat: create rawdev instances on idxd PCI probe
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (12 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 13/25] raw/ioat: include example configuration script Bruce Richardson
@ 2020-09-25 11:08   ` Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
                     ` (10 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:08 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

When a matching device is found via PCI probe, create a rawdev instance for
each queue on the hardware. Use an empty self-test function for these devices
so that the overall rawdev_autotest does not report failures.
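
As a simplified sketch of the per-queue setup described above (the
idxd_rawdev_probe_pci() hunk below is authoritative), each work queue gets
its own rawdev, named after the PCI address plus a queue suffix, and its own
slice of the portal BAR. The helper below is illustrative only:

    #include <rte_common.h>

    /* Each portal uses 4 x 4k pages (value from this patch). */
    #define IDXD_PORTAL_SIZE (4096 * 4)

    /* Illustrative helper: queue N of a device uses a portal located
     * N * IDXD_PORTAL_SIZE bytes into the device's portal BAR, and its
     * rawdev is named e.g. "0000:00:04.0-qN".
     */
    static void *
    portal_for_queue(void *portal_base, int qid)
    {
            return RTE_PTR_ADD(portal_base, qid * IDXD_PORTAL_SIZE);
    }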

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            | 237 ++++++++++++++++++++++++-
 drivers/raw/ioat/ioat_common.c         |  68 +++++++
 drivers/raw/ioat/ioat_private.h        |  33 ++++
 drivers/raw/ioat/ioat_rawdev_test.c    |   7 +
 drivers/raw/ioat/ioat_spec.h           |  64 +++++++
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  35 +++-
 7 files changed, 442 insertions(+), 3 deletions(-)
 create mode 100644 drivers/raw/ioat/ioat_common.c

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 1a30e9c31..6752959ed 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -3,8 +3,10 @@
  */
 
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 
 #include "ioat_private.h"
+#include "ioat_spec.h"
 
 #define IDXD_VENDOR_ID		0x8086
 #define IDXD_DEVICE_ID_SPR	0x0B25
@@ -16,17 +18,246 @@ const struct rte_pci_id pci_id_idxd_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+static inline int
+idxd_pci_dev_command(struct idxd_rawdev *idxd, enum rte_idxd_cmds command)
+{
+	uint8_t err_code;
+	uint16_t qid = idxd->qid;
+	int i = 0;
+
+	if (command >= idxd_disable_wq && command <= idxd_reset_wq)
+		qid = (1 << qid);
+	rte_spinlock_lock(&idxd->u.pci->lk);
+	idxd->u.pci->regs->cmd = (command << IDXD_CMD_SHIFT) | qid;
+
+	do {
+		rte_pause();
+		err_code = idxd->u.pci->regs->cmdstatus;
+		if (++i >= 1000) {
+			IOAT_PMD_ERR("Timeout waiting for command response from HW");
+			rte_spinlock_unlock(&idxd->u.pci->lk);
+			return err_code;
+		}
+	} while (idxd->u.pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK);
+	rte_spinlock_unlock(&idxd->u.pci->lk);
+
+	return err_code & CMDSTATUS_ERR_MASK;
+}
+
+static int
+idxd_is_wq_enabled(struct idxd_rawdev *idxd)
+{
+	uint32_t state = idxd->u.pci->wq_regs[idxd->qid].wqcfg[WQ_STATE_IDX];
+	return ((state >> WQ_STATE_SHIFT) & WQ_STATE_MASK) == 0x1;
+}
+
+static const struct rte_rawdev_ops idxd_pci_ops = {
+		.dev_close = idxd_rawdev_close,
+		.dev_selftest = idxd_rawdev_test,
+};
+
+/* each portal uses 4 x 4k pages */
+#define IDXD_PORTAL_SIZE (4096 * 4)
+
+static int
+init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd)
+{
+	struct idxd_pci_common *pci;
+	uint8_t nb_groups, nb_engines, nb_wqs;
+	uint16_t grp_offset, wq_offset; /* how far into bar0 the regs are */
+	uint16_t wq_size, total_wq_size;
+	uint8_t lg2_max_batch, lg2_max_copy_size;
+	unsigned int i, err_code;
+
+	pci = malloc(sizeof(*pci));
+	if (pci == NULL) {
+		IOAT_PMD_ERR("%s: Can't allocate memory", __func__);
+		goto err;
+	}
+	rte_spinlock_init(&pci->lk);
+
+	/* assign the bar registers, and then configure device */
+	pci->regs = dev->mem_resource[0].addr;
+	grp_offset = (uint16_t)pci->regs->offsets[0];
+	pci->grp_regs = RTE_PTR_ADD(pci->regs, grp_offset * 0x100);
+	wq_offset = (uint16_t)(pci->regs->offsets[0] >> 16);
+	pci->wq_regs = RTE_PTR_ADD(pci->regs, wq_offset * 0x100);
+	pci->portals = dev->mem_resource[2].addr;
+
+	/* sanity check device status */
+	if (pci->regs->gensts & GENSTS_DEV_STATE_MASK) {
+		/* need function-level-reset (FLR) or is enabled */
+		IOAT_PMD_ERR("Device status is not disabled, cannot init");
+		goto err;
+	}
+	if (pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK) {
+		/* command in progress */
+		IOAT_PMD_ERR("Device has a command in progress, cannot init");
+		goto err;
+	}
+
+	/* read basic info about the hardware for use when configuring */
+	nb_groups = (uint8_t)pci->regs->grpcap;
+	nb_engines = (uint8_t)pci->regs->engcap;
+	nb_wqs = (uint8_t)(pci->regs->wqcap >> 16);
+	total_wq_size = (uint16_t)pci->regs->wqcap;
+	lg2_max_copy_size = (uint8_t)(pci->regs->gencap >> 16) & 0x1F;
+	lg2_max_batch = (uint8_t)(pci->regs->gencap >> 21) & 0x0F;
+
+	IOAT_PMD_DEBUG("nb_groups = %u, nb_engines = %u, nb_wqs = %u",
+			nb_groups, nb_engines, nb_wqs);
+
+	/* zero out any old config */
+	for (i = 0; i < nb_groups; i++) {
+		pci->grp_regs[i].grpengcfg = 0;
+		pci->grp_regs[i].grpwqcfg[0] = 0;
+	}
+	for (i = 0; i < nb_wqs; i++)
+		pci->wq_regs[i].wqcfg[0] = 0;
+
+	/* put each engine into a separate group to avoid reordering */
+	if (nb_groups > nb_engines)
+		nb_groups = nb_engines;
+	if (nb_groups < nb_engines)
+		nb_engines = nb_groups;
+
+	/* assign engines to groups, round-robin style */
+	for (i = 0; i < nb_engines; i++) {
+		IOAT_PMD_DEBUG("Assigning engine %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpengcfg |= (1ULL << i);
+	}
+
+	/* now do the same for queues and give work slots to each queue */
+	wq_size = total_wq_size / nb_wqs;
+	IOAT_PMD_DEBUG("Work queue size = %u, max batch = 2^%u, max copy = 2^%u",
+			wq_size, lg2_max_batch, lg2_max_copy_size);
+	for (i = 0; i < nb_wqs; i++) {
+		/* add engine "i" to a group */
+		IOAT_PMD_DEBUG("Assigning work queue %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpwqcfg[0] |= (1ULL << i);
+		/* now configure it, in terms of size, max batch, mode */
+		pci->wq_regs[i].wqcfg[WQ_SIZE_IDX] = wq_size;
+		pci->wq_regs[i].wqcfg[WQ_MODE_IDX] = (1 << WQ_PRIORITY_SHIFT) |
+				WQ_MODE_DEDICATED;
+		pci->wq_regs[i].wqcfg[WQ_SIZES_IDX] = lg2_max_copy_size |
+				(lg2_max_batch << WQ_BATCH_SZ_SHIFT);
+	}
+
+	/* dump the group configuration to output */
+	for (i = 0; i < nb_groups; i++) {
+		IOAT_PMD_DEBUG("## Group %d", i);
+		IOAT_PMD_DEBUG("    GRPWQCFG: %"PRIx64, pci->grp_regs[i].grpwqcfg[0]);
+		IOAT_PMD_DEBUG("    GRPENGCFG: %"PRIx64, pci->grp_regs[i].grpengcfg);
+		IOAT_PMD_DEBUG("    GRPFLAGS: %"PRIx32, pci->grp_regs[i].grpflags);
+	}
+
+	idxd->u.pci = pci;
+	idxd->max_batches = wq_size;
+
+	/* enable the device itself */
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error enabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device enabled OK");
+
+	return nb_wqs;
+
+err:
+	free(pci);
+	return -1;
+}
+
 static int
 idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 {
-	int ret = 0;
+	struct idxd_rawdev idxd = {0};
+	uint8_t nb_wqs;
+	int qid, ret = 0;
 	char name[PCI_PRI_STR_SIZE];
 
 	rte_pci_device_name(&dev->addr, name, sizeof(name));
 	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
 	dev->device.driver = &drv->driver;
 
-	return ret;
+	ret = init_pci_device(dev, &idxd);
+	if (ret < 0) {
+		IOAT_PMD_ERR("Error initializing PCI hardware");
+		return ret;
+	}
+	nb_wqs = (uint8_t)ret;
+
+	/* set up one device for each queue */
+	for (qid = 0; qid < nb_wqs; qid++) {
+		char qname[32];
+
+		/* add the queue number to each device name */
+		snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
+		idxd.qid = qid;
+		idxd.public.portal = RTE_PTR_ADD(idxd.u.pci->portals,
+				qid * IDXD_PORTAL_SIZE);
+		if (idxd_is_wq_enabled(&idxd))
+			IOAT_PMD_ERR("Error, WQ %u seems enabled", qid);
+		ret = idxd_rawdev_create(qname, &dev->device,
+				&idxd, &idxd_pci_ops);
+		if (ret != 0) {
+			IOAT_PMD_ERR("Failed to create rawdev %s", name);
+			if (qid == 0) /* if no devices using this, free pci */
+				free(idxd.u.pci);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+idxd_rawdev_destroy(const char *name)
+{
+	int ret;
+	uint8_t err_code;
+	struct rte_rawdev *rdev;
+	struct idxd_rawdev *idxd;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid device name");
+		return -EINVAL;
+	}
+
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* disable the device */
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error disabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device disabled OK");
+
+	/* free device memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+		rte_memzone_free(idxd->mz);
+	}
+
+	/* rte_rawdev_close is called by pmd_release */
+	ret = rte_rawdev_pmd_release(rdev);
+	if (ret)
+		IOAT_PMD_DEBUG("Device cleanup failed");
+
+	return 0;
 }
 
 static int
@@ -40,6 +271,8 @@ idxd_rawdev_remove_pci(struct rte_pci_device *dev)
 	IOAT_PMD_INFO("Closing %s on NUMA node %d",
 			name, dev->device.numa_node);
 
+	ret = idxd_rawdev_destroy(name);
+
 	return ret;
 }
 
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
new file mode 100644
index 000000000..c3aa015ed
--- /dev/null
+++ b/drivers/raw/ioat/ioat_common.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_rawdev_pmd.h>
+#include <rte_memzone.h>
+#include <rte_common.h>
+
+#include "ioat_private.h"
+
+int
+idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
+{
+	return 0;
+}
+
+int
+idxd_rawdev_create(const char *name, struct rte_device *dev,
+		   const struct idxd_rawdev *base_idxd,
+		   const struct rte_rawdev_ops *ops)
+{
+	struct idxd_rawdev *idxd;
+	struct rte_rawdev *rawdev = NULL;
+	const struct rte_memzone *mz = NULL;
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	int ret = 0;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid name of the device!");
+		ret = -EINVAL;
+		goto cleanup;
+	}
+
+	/* Allocate device structure */
+	rawdev = rte_rawdev_pmd_allocate(name, sizeof(struct idxd_rawdev),
+					 dev->numa_node);
+	if (rawdev == NULL) {
+		IOAT_PMD_ERR("Unable to allocate raw device");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+
+	snprintf(mz_name, sizeof(mz_name), "rawdev%u_private", rawdev->dev_id);
+	mz = rte_memzone_reserve(mz_name, sizeof(struct idxd_rawdev),
+			dev->numa_node, RTE_MEMZONE_IOVA_CONTIG);
+	if (mz == NULL) {
+		IOAT_PMD_ERR("Unable to reserve memzone for private data\n");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+	rawdev->dev_private = mz->addr;
+	rawdev->dev_ops = ops;
+	rawdev->device = dev;
+	rawdev->driver_name = IOAT_PMD_RAWDEV_NAME_STR;
+
+	idxd = rawdev->dev_private;
+	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->rawdev = rawdev;
+	idxd->mz = mz;
+
+	return 0;
+
+cleanup:
+	if (rawdev)
+		rte_rawdev_pmd_release(rawdev);
+
+	return ret;
+}
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index d87d4d055..53f00a9f3 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -14,6 +14,10 @@
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
+#include <rte_spinlock.h>
+#include <rte_rawdev_pmd.h>
+#include "rte_ioat_rawdev.h"
+
 extern int ioat_pmd_logtype;
 
 #define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
@@ -24,4 +28,33 @@ extern int ioat_pmd_logtype;
 #define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
 #define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
 
+struct idxd_pci_common {
+	rte_spinlock_t lk;
+	volatile struct rte_idxd_bar0 *regs;
+	volatile struct rte_idxd_wqcfg *wq_regs;
+	volatile struct rte_idxd_grpcfg *grp_regs;
+	volatile void *portals;
+};
+
+struct idxd_rawdev {
+	struct rte_idxd_rawdev public; /* the public members, must be first */
+
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	uint8_t qid;
+	uint16_t max_batches;
+
+	union {
+		struct idxd_pci_common *pci;
+	} u;
+};
+
+extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
+		       const struct idxd_rawdev *idxd,
+		       const struct rte_rawdev_ops *ops);
+
+extern int idxd_rawdev_close(struct rte_rawdev *dev);
+
+extern int idxd_rawdev_test(uint16_t dev_id);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 8ff546803..7cd0f4abf 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -7,6 +7,7 @@
 #include <rte_mbuf.h>
 #include "rte_rawdev.h"
 #include "rte_ioat_rawdev.h"
+#include "ioat_private.h"
 
 int ioat_rawdev_test(uint16_t dev_id); /* pre-define to keep compiler happy */
 
@@ -258,3 +259,9 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
+
+int
+idxd_rawdev_test(uint16_t dev_id __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/raw/ioat/ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
index 9645e16d4..1aa768b9a 100644
--- a/drivers/raw/ioat/ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -268,6 +268,70 @@ union rte_ioat_hw_desc {
 	struct rte_ioat_pq_update_hw_desc pq_update;
 };
 
+/*** Definitions for Intel(R) Data Streaming Accelerator Follow ***/
+
+#define IDXD_CMD_SHIFT 20
+enum rte_idxd_cmds {
+	idxd_enable_dev = 1,
+	idxd_disable_dev,
+	idxd_drain_all,
+	idxd_abort_all,
+	idxd_reset_device,
+	idxd_enable_wq,
+	idxd_disable_wq,
+	idxd_drain_wq,
+	idxd_abort_wq,
+	idxd_reset_wq,
+};
+
+/* General bar0 registers */
+struct rte_idxd_bar0 {
+	uint32_t __rte_cache_aligned version;    /* offset 0x00 */
+	uint64_t __rte_aligned(0x10) gencap;     /* offset 0x10 */
+	uint64_t __rte_aligned(0x10) wqcap;      /* offset 0x20 */
+	uint64_t __rte_aligned(0x10) grpcap;     /* offset 0x30 */
+	uint64_t __rte_aligned(0x08) engcap;     /* offset 0x38 */
+	uint64_t __rte_aligned(0x10) opcap;      /* offset 0x40 */
+	uint64_t __rte_aligned(0x20) offsets[2]; /* offset 0x60 */
+	uint32_t __rte_aligned(0x20) gencfg;     /* offset 0x80 */
+	uint32_t __rte_aligned(0x08) genctrl;    /* offset 0x88 */
+	uint32_t __rte_aligned(0x10) gensts;     /* offset 0x90 */
+	uint32_t __rte_aligned(0x08) intcause;   /* offset 0x98 */
+	uint32_t __rte_aligned(0x10) cmd;        /* offset 0xA0 */
+	uint32_t __rte_aligned(0x08) cmdstatus;  /* offset 0xA8 */
+	uint64_t __rte_aligned(0x20) swerror[4]; /* offset 0xC0 */
+};
+
+struct rte_idxd_wqcfg {
+	uint32_t wqcfg[8] __rte_aligned(32); /* 32-byte register */
+};
+
+#define WQ_SIZE_IDX      0 /* size is in first 32-bit value */
+#define WQ_THRESHOLD_IDX 1 /* WQ threshold second 32-bits */
+#define WQ_MODE_IDX      2 /* WQ mode and other flags */
+#define WQ_SIZES_IDX     3 /* WQ transfer and batch sizes */
+#define WQ_OCC_INT_IDX   4 /* WQ occupancy interrupt handle */
+#define WQ_OCC_LIMIT_IDX 5 /* WQ occupancy limit */
+#define WQ_STATE_IDX     6 /* WQ state and occupancy state */
+
+#define WQ_MODE_SHARED    0
+#define WQ_MODE_DEDICATED 1
+#define WQ_PRIORITY_SHIFT 4
+#define WQ_BATCH_SZ_SHIFT 5
+#define WQ_STATE_SHIFT 30
+#define WQ_STATE_MASK 0x3
+
+struct rte_idxd_grpcfg {
+	uint64_t grpwqcfg[4]  __rte_cache_aligned; /* 64-byte register set */
+	uint64_t grpengcfg;  /* offset 32 */
+	uint32_t grpflags;   /* offset 40 */
+};
+
+#define GENSTS_DEV_STATE_MASK 0x03
+#define CMDSTATUS_ACTIVE_SHIFT 31
+#define CMDSTATUS_ACTIVE_MASK (1 << 31)
+#define CMDSTATUS_ERR_MASK 0xFF
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index b343b7367..5eff76a1a 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -6,6 +6,7 @@ reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
 	'idxd_vdev.c',
+	'ioat_common.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
 deps += ['bus_pci',
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index c6e0b9a58..fa2eb5334 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -41,9 +41,20 @@ struct rte_ioat_generic_hw_desc {
 
 /**
  * @internal
- * Structure representing a device instance
+ * Identify the data path to use.
+ * Must be first field of rte_ioat_rawdev and rte_idxd_rawdev structs
+ */
+enum rte_ioat_dev_type {
+	RTE_IOAT_DEV,
+	RTE_IDXD_DEV,
+};
+
+/**
+ * @internal
+ * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	enum rte_ioat_dev_type type;
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
@@ -79,6 +90,28 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED			0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/**
+ * @internal
+ * Structure representing an IDXD device instance
+ */
+struct rte_idxd_rawdev {
+	enum rte_ioat_dev_type type;
+	void *portal; /* address to write the batch descriptor */
+
+	/* counters to track the batches and the individual op handles */
+	uint16_t batch_ring_sz;  /* size of batch ring */
+	uint16_t hdl_ring_sz;    /* size of the user hdl ring */
+
+	uint16_t next_batch;     /* where we write descriptor ops */
+	uint16_t next_completed; /* batch where we read completions */
+	uint16_t next_ret_hdl;   /* the next user hdl to return */
+	uint16_t last_completed_hdl; /* the last user hdl that has completed */
+	uint16_t next_free_hdl;  /* where the handle for next op will go */
+
+	struct rte_idxd_user_hdl *hdl_ring;
+	struct rte_idxd_desc_batch *batch_ring;
+};
+
 /*
  * Enqueue a copy operation onto the ioat device
  */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 15/25] raw/ioat: create rawdev instances for idxd vdevs
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (13 preceding siblings ...)
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
@ 2020-09-25 11:09   ` Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
                     ` (9 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:09 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Kevin Laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

For each vdev (DSA work queue) instance, create a rawdev instance.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_vdev.c    | 106 +++++++++++++++++++++++++++++++-
 drivers/raw/ioat/ioat_private.h |   4 ++
 2 files changed, 109 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 0509fc084..d2d588916 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -2,6 +2,12 @@
  * Copyright(c) 2020 Intel Corporation
  */
 
+#include <fcntl.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sys/mman.h>
+
+#include <rte_memzone.h>
 #include <rte_bus_vdev.h>
 #include <rte_kvargs.h>
 #include <rte_string_fns.h>
@@ -24,6 +30,36 @@ struct idxd_vdev_args {
 	uint8_t wq_id;
 };
 
+static const struct rte_rawdev_ops idxd_vdev_ops = {
+		.dev_close = idxd_rawdev_close,
+		.dev_selftest = idxd_rawdev_test,
+};
+
+static void *
+idxd_vdev_mmap_wq(struct idxd_vdev_args *args)
+{
+	void *addr;
+	char path[PATH_MAX];
+	int fd;
+
+	snprintf(path, sizeof(path), "/dev/dsa/wq%u.%u",
+			args->device_id, args->wq_id);
+	fd = open(path, O_RDWR);
+	if (fd < 0) {
+		IOAT_PMD_ERR("Failed to open device path");
+		return NULL;
+	}
+
+	addr = mmap(NULL, 0x1000, PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, 0);
+	close(fd);
+	if (addr == MAP_FAILED) {
+		IOAT_PMD_ERR("Failed to mmap device");
+		return NULL;
+	}
+
+	return addr;
+}
+
 static int
 idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
 			  void *extra_args)
@@ -70,10 +106,32 @@ idxd_vdev_parse_params(struct rte_kvargs *kvlist, struct idxd_vdev_args *args)
 	return -EINVAL;
 }
 
+static int
+idxd_vdev_get_max_batches(struct idxd_vdev_args *args)
+{
+	char sysfs_path[PATH_MAX];
+	FILE *f;
+	int ret;
+
+	snprintf(sysfs_path, sizeof(sysfs_path),
+			"/sys/bus/dsa/devices/wq%u.%u/size",
+			args->device_id, args->wq_id);
+	f = fopen(sysfs_path, "r");
+	if (f == NULL)
+		return -1;
+
+	if (fscanf(f, "%d", &ret) != 1)
+		ret = -1;
+
+	fclose(f);
+	return ret;
+}
+
 static int
 idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 {
 	struct rte_kvargs *kvlist;
+	struct idxd_rawdev idxd = {0};
 	struct idxd_vdev_args vdev_args;
 	const char *name;
 	int ret = 0;
@@ -96,13 +154,32 @@ idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 		return -EINVAL;
 	}
 
+	idxd.qid = vdev_args.wq_id;
+	idxd.u.vdev.dsa_id = vdev_args.device_id;
+	idxd.max_batches = idxd_vdev_get_max_batches(&vdev_args);
+
+	idxd.public.portal = idxd_vdev_mmap_wq(&vdev_args);
+	if (idxd.public.portal == NULL) {
+		IOAT_PMD_ERR("WQ mmap failed");
+		return -ENOENT;
+	}
+
+	ret = idxd_rawdev_create(name, &vdev->device, &idxd, &idxd_vdev_ops);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to create rawdev %s", name);
+		return ret;
+	}
+
 	return 0;
 }
 
 static int
 idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 {
+	struct idxd_rawdev *idxd;
 	const char *name;
+	struct rte_rawdev *rdev;
+	int ret = 0;
 
 	name = rte_vdev_device_name(vdev);
 	if (name == NULL)
@@ -110,7 +187,34 @@ idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 
 	IOAT_PMD_INFO("Remove DSA vdev %p", name);
 
-	return 0;
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* free context and memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+
+		if (munmap(idxd->public.portal, 0x1000) < 0) {
+			IOAT_PMD_ERR("Error unmapping portal");
+			ret = -errno;
+		}
+
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+
+		rte_memzone_free(idxd->mz);
+	}
+
+	if (rte_rawdev_pmd_release(rdev))
+		IOAT_PMD_ERR("Device cleanup failed");
+
+	return ret;
 }
 
 struct rte_vdev_driver idxd_rawdev_drv_vdev = {
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 53f00a9f3..6f7bdb499 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -45,6 +45,10 @@ struct idxd_rawdev {
 	uint16_t max_batches;
 
 	union {
+		struct {
+			unsigned int dsa_id;
+		} vdev;
+
 		struct idxd_pci_common *pci;
 	} u;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 16/25] raw/ioat: add datapath data structures for idxd devices
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (14 preceding siblings ...)
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
@ 2020-09-25 11:09   ` Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 17/25] raw/ioat: add configure function " Bruce Richardson
                     ` (8 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:09 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add in the relevant data structures for the DSA device data path. Also
include a device dump function to output the status of each device.
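
For orientation only, here is a minimal sketch of how the new descriptor
layout (see the rte_ioat_rawdev_fns.h hunk below) is intended to be filled
for a single copy operation. The enqueue logic proper arrives in a later
patch, so the exact flag combination used there may differ; the helper name
is hypothetical:

    /* Sketch using only definitions added in this patch:
     * struct rte_idxd_hw_desc, IDXD_CMD_OP_SHIFT, idxd_op_memmove and
     * IDXD_FLAG_CACHE_CONTROL. The pasid and completion fields are left
     * to the submission code.
     */
    static inline void
    sketch_fill_copy_desc(struct rte_idxd_hw_desc *desc,
                    rte_iova_t src, rte_iova_t dst, uint32_t length)
    {
            desc->op_flags = (idxd_op_memmove << IDXD_CMD_OP_SHIFT) |
                            IDXD_FLAG_CACHE_CONTROL; /* hint: allocate writes in cache */
            desc->src = src;
            desc->dst = dst;
            desc->size = length;
    }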

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 34 +++++++++++
 drivers/raw/ioat/ioat_private.h        |  2 +
 drivers/raw/ioat/ioat_rawdev_test.c    |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 80 ++++++++++++++++++++++++++
 6 files changed, 120 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 6752959ed..113ee98e8 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -54,6 +54,7 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index d2d588916..31d8916d0 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -33,6 +33,7 @@ struct idxd_vdev_args {
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index c3aa015ed..672241351 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -14,6 +14,36 @@ idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
 	return 0;
 }
 
+int
+idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	int i;
+
+	fprintf(f, "Raw Device #%d\n", dev->dev_id);
+	fprintf(f, "Driver: %s\n\n", dev->driver_name);
+
+	fprintf(f, "Portal: %p\n", rte_idxd->portal);
+	fprintf(f, "Batch Ring size: %u\n", rte_idxd->batch_ring_sz);
+	fprintf(f, "Comp Handle Ring size: %u\n\n", rte_idxd->hdl_ring_sz);
+
+	fprintf(f, "Next batch: %u\n", rte_idxd->next_batch);
+	fprintf(f, "Next batch to be completed: %u\n", rte_idxd->next_completed);
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		fprintf(f, "Batch %u @%p: submitted=%u, op_count=%u, hdl_end=%u\n",
+				i, b, b->submitted, b->op_count, b->hdl_end);
+	}
+
+	fprintf(f, "\n");
+	fprintf(f, "Next free hdl: %u\n", rte_idxd->next_free_hdl);
+	fprintf(f, "Last completed hdl: %u\n", rte_idxd->last_completed_hdl);
+	fprintf(f, "Next returned hdl: %u\n", rte_idxd->next_ret_hdl);
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
@@ -25,6 +55,10 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	int ret = 0;
 
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_hw_desc) != 64);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_idxd_hw_desc, size) != 32);
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_completion) != 32);
+
 	if (!name) {
 		IOAT_PMD_ERR("Invalid name of the device!");
 		ret = -EINVAL;
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 6f7bdb499..f521c85a1 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -61,4 +61,6 @@ extern int idxd_rawdev_close(struct rte_rawdev *dev);
 
 extern int idxd_rawdev_test(uint16_t dev_id);
 
+extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 7cd0f4abf..a9132a8f1 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -261,7 +261,8 @@ ioat_rawdev_test(uint16_t dev_id)
 }
 
 int
-idxd_rawdev_test(uint16_t dev_id __rte_unused)
+idxd_rawdev_test(uint16_t dev_id)
 {
+	rte_rawdev_dump(dev_id, stdout);
 	return 0;
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index fa2eb5334..178c432dd 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -90,6 +90,86 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED			0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/*
+ * Defines used in the data path for interacting with hardware.
+ */
+#define IDXD_CMD_OP_SHIFT 24
+enum rte_idxd_ops {
+	idxd_op_nop = 0,
+	idxd_op_batch,
+	idxd_op_drain,
+	idxd_op_memmove,
+	idxd_op_fill
+};
+
+#define IDXD_FLAG_FENCE                 (1 << 0)
+#define IDXD_FLAG_COMPLETION_ADDR_VALID (1 << 2)
+#define IDXD_FLAG_REQUEST_COMPLETION    (1 << 3)
+#define IDXD_FLAG_CACHE_CONTROL         (1 << 8)
+
+/**
+ * Hardware descriptor used by DSA hardware, for both bursts and
+ * for individual operations.
+ */
+struct rte_idxd_hw_desc {
+	uint32_t pasid;
+	uint32_t op_flags;
+	rte_iova_t completion;
+
+	RTE_STD_C11
+	union {
+		rte_iova_t src;      /* source address for copy ops etc. */
+		rte_iova_t desc_addr; /* descriptor pointer for batch */
+	};
+	rte_iova_t dst;
+
+	uint32_t size;    /* length of data for op, or batch size */
+
+	/* 28 bytes of padding here */
+} __rte_aligned(64);
+
+/**
+ * Completion record structure written back by DSA
+ */
+struct rte_idxd_completion {
+	uint8_t status;
+	uint8_t result;
+	/* 16-bits pad here */
+	uint32_t completed_size; /* data length, or descriptors for batch */
+
+	rte_iova_t fault_address;
+	uint32_t invalid_flags;
+} __rte_aligned(32);
+
+#define BATCH_SIZE 64
+
+/**
+ * Structure used inside the driver for building up and submitting
+ * a batch of operations to the DSA hardware.
+ */
+struct rte_idxd_desc_batch {
+	struct rte_idxd_completion comp; /* the completion record for batch */
+
+	uint16_t submitted;
+	uint16_t op_count;
+	uint16_t hdl_end;
+
+	struct rte_idxd_hw_desc batch_desc;
+
+	/* batches must always have 2 descriptors, so put a null at the start */
+	struct rte_idxd_hw_desc null_desc;
+	struct rte_idxd_hw_desc ops[BATCH_SIZE];
+};
+
+/**
+ * structure used to save the "handles" provided by the user to be
+ * returned to the user on job completion.
+ */
+struct rte_idxd_user_hdl {
+	uint64_t src;
+	uint64_t dst;
+};
+
 /**
  * @internal
  * Structure representing an IDXD device instance
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 17/25] raw/ioat: add configure function for idxd devices
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (15 preceding siblings ...)
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
@ 2020-09-25 11:09   ` Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 18/25] raw/ioat: add start and stop functions " Bruce Richardson
                     ` (7 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:09 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add a configure function for idxd devices, taking the same parameters as
the existing configure function for ioat. The ring_size parameter is used
to compute the maximum number of bursts to be supported by the driver,
given that the hardware processes descriptors in bursts (batches) rather
than individually.
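
As a rough illustration (not part of this patch, and assuming the 20.11
rawdev API where the size of the driver-private config struct is passed
through), an application might request a ring as below; the hypothetical
configure_dsa_dev() helper asks for 4096 descriptors, i.e. 64 batches of
BATCH_SIZE (64), which idxd_dev_configure() then caps at the device's
reported max_batches:

    #include <rte_rawdev.h>
    #include <rte_ioat_rawdev.h>

    /* hypothetical helper, illustration only */
    static int
    configure_dsa_dev(uint16_t dev_id)
    {
            struct rte_ioat_rawdev_config conf = { .ring_size = 4096 };
            struct rte_rawdev_info info = { .dev_private = &conf };

            /* 4096 / BATCH_SIZE(64) = 64 batches requested; the driver
             * reduces this if the hardware supports fewer batches */
            return rte_rawdev_configure(dev_id, &info, sizeof(conf));
    }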

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 64 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h        |  3 ++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  1 +
 5 files changed, 70 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 113ee98e8..2d8dbc2f2 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -55,6 +55,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 31d8916d0..aa6a5a9aa 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -34,6 +34,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 672241351..5173c331c 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -44,6 +44,70 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	struct rte_ioat_rawdev_config *cfg = config;
+	uint16_t max_desc = cfg->ring_size;
+	uint16_t max_batches = max_desc / BATCH_SIZE;
+	uint16_t i;
+
+	if (config_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (dev->started) {
+		IOAT_PMD_ERR("%s: Error, device is started.", __func__);
+		return -EAGAIN;
+	}
+
+	rte_idxd->hdls_disable = cfg->hdls_disable;
+
+	/* limit the batches to what can be stored in hardware */
+	if (max_batches > idxd->max_batches) {
+		IOAT_PMD_DEBUG("Ring size of %u is too large for this device, need to limit to %u batches of %u",
+				max_desc, idxd->max_batches, BATCH_SIZE);
+		max_batches = idxd->max_batches;
+		max_desc = max_batches * BATCH_SIZE;
+	}
+	if (!rte_is_power_of_2(max_desc))
+		max_desc = rte_align32pow2(max_desc);
+	IOAT_PMD_DEBUG("Rawdev %u using %u descriptors in %u batches",
+			dev->dev_id, max_desc, max_batches);
+
+	/* in case we are reconfiguring a device, free any existing memory */
+	rte_free(rte_idxd->batch_ring);
+	rte_free(rte_idxd->hdl_ring);
+
+	rte_idxd->batch_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->batch_ring) * max_batches, 0);
+	if (rte_idxd->batch_ring == NULL)
+		return -ENOMEM;
+
+	rte_idxd->hdl_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->hdl_ring) * max_desc, 0);
+	if (rte_idxd->hdl_ring == NULL) {
+		rte_free(rte_idxd->batch_ring);
+		rte_idxd->batch_ring = NULL;
+		return -ENOMEM;
+	}
+	rte_idxd->batch_ring_sz = max_batches;
+	rte_idxd->hdl_ring_sz = max_desc;
+
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		b->batch_desc.completion = rte_mem_virt2iova(&b->comp);
+		b->batch_desc.desc_addr = rte_mem_virt2iova(&b->null_desc);
+		b->batch_desc.op_flags = (idxd_op_batch << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_COMPLETION_ADDR_VALID |
+				IDXD_FLAG_REQUEST_COMPLETION;
+	}
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index f521c85a1..aba70d8d7 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -59,6 +59,9 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 extern int idxd_rawdev_close(struct rte_rawdev *dev);
 
+extern int idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 178c432dd..e9cdce016 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -187,6 +187,7 @@ struct rte_idxd_rawdev {
 	uint16_t next_ret_hdl;   /* the next user hdl to return */
 	uint16_t last_completed_hdl; /* the last user hdl that has completed */
 	uint16_t next_free_hdl;  /* where the handle for next op will go */
+	uint16_t hdls_disable;   /* disable tracking completion handles */
 
 	struct rte_idxd_user_hdl *hdl_ring;
 	struct rte_idxd_desc_batch *batch_ring;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 18/25] raw/ioat: add start and stop functions for idxd devices
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (16 preceding siblings ...)
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 17/25] raw/ioat: add configure function " Bruce Richardson
@ 2020-09-25 11:09   ` Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 19/25] raw/ioat: add data path " Bruce Richardson
                     ` (6 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:09 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add the start and stop functions for DSA hardware devices using the
vfio/uio kernel drivers. For vdevs using the idxd kernel driver, the device
must be started using sysfs before the device node appears for vdev use -
making start/stop functions in the driver unnecessary.
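
A minimal usage sketch, not from this patch: for DSA devices bound to
vfio-pci the application enables and disables the work queue through the
standard rawdev start/stop calls, which land in the new functions below
(dev_id and prior configuration are assumed):

    /* sketch: "dev_id" is assumed to be an already-configured idxd
     * rawdev instance bound to vfio-pci */
    if (rte_rawdev_start(dev_id) != 0)
            return -1;      /* work queue could not be enabled */

    /* ... enqueue / perform / complete operations ... */

    rte_rawdev_stop(dev_id);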

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c | 50 +++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 2d8dbc2f2..e4eecdc3a 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -51,11 +51,61 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 	return ((state >> WQ_STATE_SHIFT) & WQ_STATE_MASK) == 0x1;
 }
 
+static void
+idxd_pci_dev_stop(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (!idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Work queue %d already disabled", idxd->qid);
+		return;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_wq);
+	if (err_code || idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed disabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return;
+	}
+	IOAT_PMD_DEBUG("Work queue %d disabled OK", idxd->qid);
+}
+
+static int
+idxd_pci_dev_start(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_WARN("WQ %d already enabled", idxd->qid);
+		return 0;
+	}
+
+	if (idxd->public.batch_ring == NULL) {
+		IOAT_PMD_ERR("WQ %d has not been fully configured", idxd->qid);
+		return -EINVAL;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_wq);
+	if (err_code || !idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed enabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return err_code == 0 ? -1 : err_code;
+	}
+
+	IOAT_PMD_DEBUG("Work queue %d enabled OK", idxd->qid);
+
+	return 0;
+}
+
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_start = idxd_pci_dev_start,
+		.dev_stop = idxd_pci_dev_stop,
 };
 
 /* each portal uses 4 x 4k pages */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 19/25] raw/ioat: add data path for idxd devices
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (17 preceding siblings ...)
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 18/25] raw/ioat: add start and stop functions " Bruce Richardson
@ 2020-09-25 11:09   ` Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 20/25] raw/ioat: add info function " Bruce Richardson
                     ` (5 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:09 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add support for doing copies using DSA hardware. This is implemented by
switching on the device type field at the start of the inline functions.
Since no hardware will have both device types present, this branch will
always be predicted correctly after the first call, meaning it has little
to no performance penalty.
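
For reference, a minimal sketch of the datapath sequence these inline
functions support; the same application code now drives either device
type. The dev_id, IOVA addresses, buffer pointers and lengths are all
assumed to be set up already, and the handles are opaque application
cookies:

    /* enqueue one copy; returns 1 on success, 0 if there is no space */
    if (rte_ioat_enqueue_copy(dev_id, src_iova, dst_iova, len,
                    (uintptr_t)src_buf, (uintptr_t)dst_buf) != 1)
            return -1;

    /* ring the doorbell so the hardware starts on the enqueued ops */
    rte_ioat_perform_ops(dev_id);

    /* later: gather completions plus the handles given at enqueue time */
    uintptr_t src_hdls[8], dst_hdls[8];
    int completed = rte_ioat_completed_ops(dev_id, 8, src_hdls, dst_hdls);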

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/ioat_common.c         |   1 +
 drivers/raw/ioat/ioat_rawdev.c         |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 201 +++++++++++++++++++++++--
 3 files changed, 192 insertions(+), 11 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 5173c331c..6a4e2979f 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -153,6 +153,7 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 	idxd = rawdev->dev_private;
 	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->public.type = RTE_IDXD_DEV;
 	idxd->rawdev = rawdev;
 	idxd->mz = mz;
 
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 1fe32278d..0097be87e 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -260,6 +260,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	rawdev->driver_name = dev->device.driver->name;
 
 	ioat = rawdev->dev_private;
+	ioat->type = RTE_IOAT_DEV;
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index e9cdce016..36ba876ea 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -196,8 +196,8 @@ struct rte_idxd_rawdev {
 /*
  * Enqueue a copy operation onto the ioat device
  */
-static inline int
-rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+static __rte_always_inline int
+__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -233,8 +233,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 }
 
 /* add fence to last written descriptor */
-static inline int
-rte_ioat_fence(int dev_id)
+static __rte_always_inline int
+__ioat_fence(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -252,8 +252,8 @@ rte_ioat_fence(int dev_id)
 /*
  * Trigger hardware to begin performing enqueued operations
  */
-static inline void
-rte_ioat_perform_ops(int dev_id)
+static __rte_always_inline void
+__ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -268,8 +268,8 @@ rte_ioat_perform_ops(int dev_id)
  * @internal
  * Returns the index of the last completed operation.
  */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+static __rte_always_inline int
+__ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 {
 	uint64_t status = ioat->status;
 
@@ -283,8 +283,8 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /*
  * Returns details of operations that have been completed
  */
-static inline int
-rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
+static __rte_always_inline int
+__ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -295,7 +295,7 @@ rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 	int error;
 	int i = 0;
 
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	end_read = (__ioat_get_last_completed(ioat, &error) + 1) & mask;
 	count = (end_read - (read & mask)) & mask;
 
 	if (error) {
@@ -332,6 +332,185 @@ rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static __rte_always_inline int
+__idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
+		const struct rte_idxd_user_hdl *hdl)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+
+	/* check for room in the handle ring */
+	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl)
+		goto failed;
+
+	/* check for space in current batch */
+	if (b->op_count >= BATCH_SIZE)
+		goto failed;
+
+	/* check that we can actually use the current batch */
+	if (b->submitted)
+		goto failed;
+
+	/* write the descriptor */
+	b->ops[b->op_count++] = *desc;
+
+	/* store the completion details */
+	if (!idxd->hdls_disable)
+		idxd->hdl_ring[idxd->next_free_hdl] = *hdl;
+	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
+		idxd->next_free_hdl = 0;
+
+	return 1;
+
+failed:
+	rte_errno = ENOSPC;
+	return 0;
+}
+
+static __rte_always_inline int
+__idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	const struct rte_idxd_hw_desc desc = {
+			.op_flags =  (idxd_op_memmove << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_CACHE_CONTROL,
+			.src = src,
+			.dst = dst,
+			.size = length
+	};
+	const struct rte_idxd_user_hdl hdl = {
+			.src = src_hdl,
+			.dst = dst_hdl
+	};
+	return __idxd_write_desc(dev_id, &desc, &hdl);
+}
+
+static __rte_always_inline int
+__idxd_fence(int dev_id)
+{
+	static const struct rte_idxd_hw_desc fence = {
+			.op_flags = IDXD_FLAG_FENCE
+	};
+	static const struct rte_idxd_user_hdl null_hdl;
+	return __idxd_write_desc(dev_id, &fence, &null_hdl);
+}
+
+static __rte_always_inline void
+__idxd_movdir64b(volatile void *dst, const void *src)
+{
+	asm volatile (".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
+			:
+			: "a" (dst), "d" (src));
+}
+
+static __rte_always_inline void
+__idxd_perform_ops(int dev_id)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+
+	if (b->submitted || b->op_count == 0)
+		return;
+	b->hdl_end = idxd->next_free_hdl;
+	b->comp.status = 0;
+	b->submitted = 1;
+	b->batch_desc.size = b->op_count + 1;
+	__idxd_movdir64b(idxd->portal, &b->batch_desc);
+
+	if (++idxd->next_batch == idxd->batch_ring_sz)
+		idxd->next_batch = 0;
+}
+
+static __rte_always_inline int
+__idxd_completed_ops(int dev_id, uint8_t max_ops,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_completed];
+	uint16_t h_idx = idxd->next_ret_hdl;
+	int n = 0;
+
+	while (b->submitted && b->comp.status != 0) {
+		idxd->last_completed_hdl = b->hdl_end;
+		b->submitted = 0;
+		b->op_count = 0;
+		if (++idxd->next_completed == idxd->batch_ring_sz)
+			idxd->next_completed = 0;
+		b = &idxd->batch_ring[idxd->next_completed];
+	}
+
+	if (!idxd->hdls_disable)
+		for (n = 0; n < max_ops && h_idx != idxd->last_completed_hdl; n++) {
+			src_hdls[n] = idxd->hdl_ring[h_idx].src;
+			dst_hdls[n] = idxd->hdl_ring[h_idx].dst;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+	else
+		while (h_idx != idxd->last_completed_hdl) {
+			n++;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+
+	idxd->next_ret_hdl = h_idx;
+
+	return n;
+}
+
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl);
+	else
+		return __ioat_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl);
+}
+
+static inline int
+rte_ioat_fence(int dev_id)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_fence(dev_id);
+	else
+		return __ioat_fence(dev_id);
+}
+
+static inline void
+rte_ioat_perform_ops(int dev_id)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_perform_ops(dev_id);
+	else
+		return __ioat_perform_ops(dev_id);
+}
+
+static inline int
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_completed_ops(dev_id, max_copies,
+				src_hdls, dst_hdls);
+	else
+		return __ioat_completed_ops(dev_id,  max_copies,
+				src_hdls, dst_hdls);
+}
+
 static inline void
 __rte_deprecated_msg("use rte_ioat_perform_ops() instead")
 rte_ioat_do_copies(int dev_id) { rte_ioat_perform_ops(dev_id); }
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 20/25] raw/ioat: add info function for idxd devices
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (18 preceding siblings ...)
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 19/25] raw/ioat: add data path " Bruce Richardson
@ 2020-09-25 11:09   ` Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 21/25] raw/ioat: create separate statistics structure Bruce Richardson
                     ` (4 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:09 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add the info-get function for DSA devices, returning just the ring size
information for the device, the same as is returned for existing
IOAT/CBDMA devices.
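
A short sketch of reading this information back from an application,
again assuming the 20.11 rawdev API where the size of the driver-private
config struct is passed to the info call:

    struct rte_ioat_rawdev_config conf;
    struct rte_rawdev_info info = { .dev_private = &conf };

    if (rte_rawdev_info_get(dev_id, &info, sizeof(conf)) == 0)
            printf("ring size: %u, hdls_disable: %d\n",
                            (unsigned int)conf.ring_size,
                            (int)conf.hdls_disable);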

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c     |  1 +
 drivers/raw/ioat/idxd_vdev.c    |  1 +
 drivers/raw/ioat/ioat_common.c  | 18 ++++++++++++++++++
 drivers/raw/ioat/ioat_private.h |  3 +++
 4 files changed, 23 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index e4eecdc3a..475763e3c 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -106,6 +106,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index aa6a5a9aa..5fbbd8e25 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -35,6 +35,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 6a4e2979f..b5cea2fda 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -44,6 +44,24 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size)
+{
+	struct rte_ioat_rawdev_config *cfg = dev_info;
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+
+	if (info_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (cfg != NULL) {
+		cfg->ring_size = rte_idxd->hdl_ring_sz;
+		cfg->hdls_disable = rte_idxd->hdls_disable;
+	}
+	return 0;
+}
+
 int
 idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size)
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index aba70d8d7..0f80d60bf 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -62,6 +62,9 @@ extern int idxd_rawdev_close(struct rte_rawdev *dev);
 extern int idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size);
 
+extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 21/25] raw/ioat: create separate statistics structure
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (19 preceding siblings ...)
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 20/25] raw/ioat: add info function " Bruce Richardson
@ 2020-09-25 11:09   ` Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
                     ` (3 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:09 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Rather than having the xstats as fields inside the main driver structure,
create a separate structure type for them.

As part of the change, the stats functions that previously referred to
each counter by name are simplified to use the xstat id to index directly
into the new structure, making the code shorter and simpler.
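
To illustrate the direct-indexing approach (illustration only, using the
driver-internal rte_ioat_xstats type): since the new structure consists
solely of uint64_t counters, it can be viewed as a flat array, so an
xstat id simply selects the corresponding field:

    struct rte_ioat_xstats xs = { .enqueued = 10, .completed = 8 };
    const uint64_t *stats = (const void *)&xs;
    /* stats[1] == xs.enqueued, stats[3] == xs.completed;
     * any id >= 4 is out of range and reported as zero */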

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c         | 40 +++++++-------------------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 30 ++++++++++++-------
 2 files changed, 29 insertions(+), 41 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 0097be87e..4ea913fff 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -132,16 +132,14 @@ ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
 		uint64_t values[], unsigned int n)
 {
 	const struct rte_ioat_rawdev *ioat = dev->dev_private;
+	const uint64_t *stats = (const void *)&ioat->xstats;
 	unsigned int i;
 
 	for (i = 0; i < n; i++) {
-		switch (ids[i]) {
-		case 0: values[i] = ioat->enqueue_failed; break;
-		case 1: values[i] = ioat->enqueued; break;
-		case 2: values[i] = ioat->started; break;
-		case 3: values[i] = ioat->completed; break;
-		default: values[i] = 0; break;
-		}
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			values[i] = stats[ids[i]];
+		else
+			values[i] = 0;
 	}
 	return n;
 }
@@ -167,35 +165,17 @@ static int
 ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
 {
 	struct rte_ioat_rawdev *ioat = dev->dev_private;
+	uint64_t *stats = (void *)&ioat->xstats;
 	unsigned int i;
 
 	if (!ids) {
-		ioat->enqueue_failed = 0;
-		ioat->enqueued = 0;
-		ioat->started = 0;
-		ioat->completed = 0;
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
 		return 0;
 	}
 
-	for (i = 0; i < nb_ids; i++) {
-		switch (ids[i]) {
-		case 0:
-			ioat->enqueue_failed = 0;
-			break;
-		case 1:
-			ioat->enqueued = 0;
-			break;
-		case 2:
-			ioat->started = 0;
-			break;
-		case 3:
-			ioat->completed = 0;
-			break;
-		default:
-			IOAT_PMD_WARN("Invalid xstat id - cannot reset value");
-			break;
-		}
-	}
+	for (i = 0; i < nb_ids; i++)
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			stats[ids[i]] = 0;
 
 	return 0;
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 36ba876ea..89bfc8d21 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -49,17 +49,31 @@ enum rte_ioat_dev_type {
 	RTE_IDXD_DEV,
 };
 
+/**
+ * @internal
+ * some statistics for tracking, if added/changed update xstats fns
+ */
+struct rte_ioat_xstats {
+	uint64_t enqueue_failed;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+};
+
 /**
  * @internal
  * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	/* common fields at the top - match those in rte_idxd_rawdev */
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile uint16_t *doorbell;
+	volatile uint16_t *doorbell __rte_cache_aligned;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -72,12 +86,6 @@ struct rte_ioat_rawdev {
 	unsigned short next_read;
 	unsigned short next_write;
 
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
 
@@ -209,7 +217,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	struct rte_ioat_generic_hw_desc *desc;
 
 	if (space == 0) {
-		ioat->enqueue_failed++;
+		ioat->xstats.enqueue_failed++;
 		return 0;
 	}
 
@@ -228,7 +236,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 					(int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
-	ioat->enqueued++;
+	ioat->xstats.enqueued++;
 	return 1;
 }
 
@@ -261,7 +269,7 @@ __ioat_perform_ops(int dev_id)
 			.control.completion_update = 1;
 	rte_compiler_barrier();
 	*ioat->doorbell = ioat->next_write;
-	ioat->started = ioat->enqueued;
+	ioat->xstats.started = ioat->xstats.enqueued;
 }
 
 /**
@@ -328,7 +336,7 @@ __ioat_completed_ops(int dev_id, uint8_t max_copies,
 
 end:
 	ioat->next_read = read;
-	ioat->completed += count;
+	ioat->xstats.completed += count;
 	return count;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 22/25] raw/ioat: move xstats functions to common file
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (20 preceding siblings ...)
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 21/25] raw/ioat: create separate statistics structure Bruce Richardson
@ 2020-09-25 11:09   ` Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
                     ` (2 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:09 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

The xstats functions can be used by all ioat devices, so move them from the
ioat_rawdev.c file to ioat_common.c, and add the function prototypes to the
internal header file.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/ioat_common.c  | 59 +++++++++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 10 ++++++
 drivers/raw/ioat/ioat_rawdev.c  | 58 --------------------------------
 3 files changed, 69 insertions(+), 58 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index b5cea2fda..142e171bc 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -5,9 +5,68 @@
 #include <rte_rawdev_pmd.h>
 #include <rte_memzone.h>
 #include <rte_common.h>
+#include <rte_string_fns.h>
 
 #include "ioat_private.h"
 
+static const char * const xstat_names[] = {
+		"failed_enqueues", "successful_enqueues",
+		"copies_started", "copies_completed"
+};
+
+int
+ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n)
+{
+	const struct rte_ioat_rawdev *ioat = dev->dev_private;
+	const uint64_t *stats = (const void *)&ioat->xstats;
+	unsigned int i;
+
+	for (i = 0; i < n; i++) {
+		if (ids[i] >= sizeof(ioat->xstats)/sizeof(*stats))
+			values[i] = 0;
+		else
+			values[i] = stats[ids[i]];
+	}
+	return n;
+}
+
+int
+ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size)
+{
+	unsigned int i;
+
+	RTE_SET_USED(dev);
+	if (size < RTE_DIM(xstat_names))
+		return RTE_DIM(xstat_names);
+
+	for (i = 0; i < RTE_DIM(xstat_names); i++)
+		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
+
+	return RTE_DIM(xstat_names);
+}
+
+int
+ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
+{
+	struct rte_ioat_rawdev *ioat = dev->dev_private;
+	uint64_t *stats = (void *)&ioat->xstats;
+	unsigned int i;
+
+	if (!ids) {
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
+		return 0;
+	}
+
+	for (i = 0; i < nb_ids; i++)
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			stats[ids[i]] = 0;
+
+	return 0;
+}
+
 int
 idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
 {
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 0f80d60bf..ab9a3e6cc 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -53,6 +53,16 @@ struct idxd_rawdev {
 	} u;
 };
 
+int ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n);
+
+int ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size);
+
+int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
+		uint32_t nb_ids);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 4ea913fff..dd2543c80 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -122,64 +122,6 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 	return 0;
 }
 
-static const char * const xstat_names[] = {
-		"failed_enqueues", "successful_enqueues",
-		"copies_started", "copies_completed"
-};
-
-static int
-ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
-		uint64_t values[], unsigned int n)
-{
-	const struct rte_ioat_rawdev *ioat = dev->dev_private;
-	const uint64_t *stats = (const void *)&ioat->xstats;
-	unsigned int i;
-
-	for (i = 0; i < n; i++) {
-		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
-			values[i] = stats[ids[i]];
-		else
-			values[i] = 0;
-	}
-	return n;
-}
-
-static int
-ioat_xstats_get_names(const struct rte_rawdev *dev,
-		struct rte_rawdev_xstats_name *names,
-		unsigned int size)
-{
-	unsigned int i;
-
-	RTE_SET_USED(dev);
-	if (size < RTE_DIM(xstat_names))
-		return RTE_DIM(xstat_names);
-
-	for (i = 0; i < RTE_DIM(xstat_names); i++)
-		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
-
-	return RTE_DIM(xstat_names);
-}
-
-static int
-ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
-{
-	struct rte_ioat_rawdev *ioat = dev->dev_private;
-	uint64_t *stats = (void *)&ioat->xstats;
-	unsigned int i;
-
-	if (!ids) {
-		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
-		return 0;
-	}
-
-	for (i = 0; i < nb_ids; i++)
-		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
-			stats[ids[i]] = 0;
-
-	return 0;
-}
-
 static int
 ioat_dev_close(struct rte_rawdev *dev __rte_unused)
 {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 23/25] raw/ioat: add xstats tracking for idxd devices
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (21 preceding siblings ...)
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
@ 2020-09-25 11:09   ` Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 24/25] raw/ioat: clean up use of common test function Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 25/25] raw/ioat: add fill operation Bruce Richardson
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:09 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Update the relevant stats in the data path functions and point the xstats
function pointers in the device ops structures at the existing ioat
implementations.

At this point, all necessary hooks for supporting the existing unit tests
are in place, so call them for each device.
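
As an informal example of how these hooks are exercised from an
application (the ids correspond to the existing xstat_names array; the
query goes through the standard rawdev xstats API):

    /* needs <inttypes.h> for the PRIu64 format macros */
    unsigned int ids[4] = { 0, 1, 2, 3 };
    uint64_t vals[4];

    if (rte_rawdev_xstats_get(dev_id, ids, vals, 4) == 4)
            printf("failed_enqueues=%" PRIu64 ", successful_enqueues=%" PRIu64
                    ", copies_started=%" PRIu64 ", copies_completed=%" PRIu64 "\n",
                    vals[0], vals[1], vals[2], vals[3]);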

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            | 3 +++
 drivers/raw/ioat/idxd_vdev.c           | 3 +++
 drivers/raw/ioat/ioat_rawdev_test.c    | 2 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 6 ++++++
 4 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 475763e3c..5dfa4ffab 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -107,6 +107,9 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 5fbbd8e25..3a5cc94b4 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -36,6 +36,9 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index a9132a8f1..0b172f318 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -264,5 +264,5 @@ int
 idxd_rawdev_test(uint16_t dev_id)
 {
 	rte_rawdev_dump(dev_id, stdout);
-	return 0;
+	return ioat_rawdev_test(dev_id);
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 89bfc8d21..d0045d8a4 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -184,6 +184,8 @@ struct rte_idxd_user_hdl {
  */
 struct rte_idxd_rawdev {
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	void *portal; /* address to write the batch descriptor */
 
 	/* counters to track the batches and the individual op handles */
@@ -369,9 +371,11 @@ __idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
 	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
 		idxd->next_free_hdl = 0;
 
+	idxd->xstats.enqueued++;
 	return 1;
 
 failed:
+	idxd->xstats.enqueue_failed++;
 	rte_errno = ENOSPC;
 	return 0;
 }
@@ -429,6 +433,7 @@ __idxd_perform_ops(int dev_id)
 
 	if (++idxd->next_batch == idxd->batch_ring_sz)
 		idxd->next_batch = 0;
+	idxd->xstats.started = idxd->xstats.enqueued;
 }
 
 static __rte_always_inline int
@@ -466,6 +471,7 @@ __idxd_completed_ops(int dev_id, uint8_t max_ops,
 
 	idxd->next_ret_hdl = h_idx;
 
+	idxd->xstats.completed += n;
 	return n;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 24/25] raw/ioat: clean up use of common test function
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (22 preceding siblings ...)
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
@ 2020-09-25 11:09   ` Bruce Richardson
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 25/25] raw/ioat: add fill operation Bruce Richardson
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:09 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Now that all devices can pass the same set of unit tests, eliminate the
temporary idxd_rawdev_test function and move the prototype for
ioat_rawdev_test to the proper internal header file, to be used by all
device instances.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c         | 2 +-
 drivers/raw/ioat/idxd_vdev.c        | 2 +-
 drivers/raw/ioat/ioat_private.h     | 4 ++--
 drivers/raw/ioat/ioat_rawdev.c      | 2 --
 drivers/raw/ioat/ioat_rawdev_test.c | 7 -------
 5 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 5dfa4ffab..c0da1f977 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -101,7 +101,7 @@ idxd_pci_dev_start(struct rte_rawdev *dev)
 
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 3a5cc94b4..989d9e45f 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -32,7 +32,7 @@ struct idxd_vdev_args {
 
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_info_get = idxd_dev_info_get,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index ab9a3e6cc..a74bc0422 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -63,6 +63,8 @@ int ioat_xstats_get_names(const struct rte_rawdev *dev,
 int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
 		uint32_t nb_ids);
 
+extern int ioat_rawdev_test(uint16_t dev_id);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
@@ -75,8 +77,6 @@ extern int idxd_dev_configure(const struct rte_rawdev *dev,
 extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 		size_t info_size);
 
-extern int idxd_rawdev_test(uint16_t dev_id);
-
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
 
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index dd2543c80..2c88b4369 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -128,8 +128,6 @@ ioat_dev_close(struct rte_rawdev *dev __rte_unused)
 	return 0;
 }
 
-extern int ioat_rawdev_test(uint16_t dev_id);
-
 static int
 ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 0b172f318..7be6f2a2d 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -259,10 +259,3 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
-
-int
-idxd_rawdev_test(uint16_t dev_id)
-{
-	rte_rawdev_dump(dev_id, stdout);
-	return ioat_rawdev_test(dev_id);
-}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v3 25/25] raw/ioat: add fill operation
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
                     ` (23 preceding siblings ...)
  2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 24/25] raw/ioat: clean up use of common test function Bruce Richardson
@ 2020-09-25 11:09   ` Bruce Richardson
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:09 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Kevin Laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Add fill operation enqueue support for IOAT and IDXD. The fill enqueue is
similar to the copy enqueue, but takes a 'pattern' to be written to the
destination address rather than a source address to copy from. This patch
also includes an additional test case for the new operation type.
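
A minimal sketch of the new fill call, mirroring the copy flow; dst_iova,
len and the mbuf handle are assumed to be set up by the application:

    uint64_t pattern = 0x0102030405060708;

    if (rte_ioat_enqueue_fill(dev_id, pattern, dst_iova, len,
                    (uintptr_t)dst_mbuf) != 1)
            return -1;
    rte_ioat_perform_ops(dev_id);
    /* completions are later collected with rte_ioat_completed_ops(),
     * exactly as for copies; the source handle slot is unused */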

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/rawdevs/ioat.rst            | 10 ++++
 doc/guides/rel_notes/release_20_11.rst |  2 +
 drivers/raw/ioat/ioat_rawdev_test.c    | 44 +++++++++++++++++
 drivers/raw/ioat/rte_ioat_rawdev.h     | 26 +++++++++++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 65 ++++++++++++++++++++++++--
 5 files changed, 142 insertions(+), 5 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 7c2a2d457..250cfc48a 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -285,6 +285,16 @@ is correct before freeing the data buffers using the returned handles:
         }
 
 
+Filling an Area of Memory
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The IOAT driver also has support for the ``fill`` operation, where an area
+of memory is overwritten, or filled, with a short pattern of data.
+Fill operations can be performed in much the same way as copy operations
+described above, just using the ``rte_ioat_enqueue_fill()`` function rather
+than the ``rte_ioat_enqueue_copy()`` function.
+
+
 Querying Device Statistics
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4d8b78154..dd65b779d 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -84,6 +84,8 @@ New Features
 
   * Added support for Intel\ |reg| Data Streaming Accelerator hardware.
     For more information, see https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
+  * Added support for the fill operation via the API ``rte_ioat_enqueue_fill()``,
+    where the hardware fills an area of memory with a repeating pattern.
   * Added a per-device configuration flag to disable management of user-provided completion handles
   * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
     and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 7be6f2a2d..64269af55 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -152,6 +152,46 @@ test_enqueue_copies(int dev_id)
 	return 0;
 }
 
+static int
+test_enqueue_fill(int dev_id)
+{
+	const unsigned int length[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst = rte_pktmbuf_alloc(pool);
+	char *dst_data = rte_pktmbuf_mtod(dst, char *);
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	for (i = 0; i < RTE_DIM(length); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, length[i]);
+
+		/* perform the fill operation */
+		if (rte_ioat_enqueue_fill(dev_id, pattern,
+				dst->buf_iova + dst->data_off, length[i],
+				(uintptr_t)dst) != 1) {
+			PRINT_ERR("Error with rte_ioat_enqueue_fill\n");
+			return -1;
+		}
+
+		rte_ioat_perform_ops(dev_id);
+		usleep(100);
+
+		/* check the result */
+		for (j = 0; j < length[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte) {
+				PRINT_ERR("Error with fill operation (length = %u): got (%x), not (%x)\n",
+						length[i], dst_data[j],
+						pat_byte);
+				return -1;
+			}
+		}
+	}
+
+	rte_pktmbuf_free(dst);
+	return 0;
+}
+
 int
 ioat_rawdev_test(uint16_t dev_id)
 {
@@ -238,6 +278,10 @@ ioat_rawdev_test(uint16_t dev_id)
 	}
 	printf("\n");
 
+	/* test enqueue fill operation */
+	if (test_enqueue_fill(dev_id) != 0)
+		goto err;
+
 	rte_rawdev_stop(dev_id);
 	if (rte_rawdev_xstats_reset(dev_id, NULL, 0) != 0) {
 		PRINT_ERR("Error resetting xstat values\n");
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 6b891cd44..b7632ebf3 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -37,6 +37,32 @@ struct rte_ioat_rawdev_config {
 	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
+/**
+ * Enqueue a fill operation onto the ioat device
+ *
+ * This queues up a fill operation to be performed by hardware, but does not
+ * trigger hardware to begin that operation.
+ *
+ * @param dev_id
+ *   The rawdev device id of the ioat instance
+ * @param pattern
+ *   The pattern to populate the destination buffer with
+ * @param dst
+ *   The physical address of the destination buffer
+ * @param length
+ *   The length of the destination buffer
+ * @param dst_hdl
+ *   An opaque handle for the destination data, to be returned when this
+ *   operation has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
+ * @return
+ *   Number of operations enqueued, either 0 or 1
+ */
+static inline int
+rte_ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int length, uintptr_t dst_hdl);
+
 /**
  * Enqueue a copy operation onto the ioat device
  *
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index d0045d8a4..c2c4601ca 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -115,6 +115,13 @@ enum rte_idxd_ops {
 #define IDXD_FLAG_REQUEST_COMPLETION    (1 << 3)
 #define IDXD_FLAG_CACHE_CONTROL         (1 << 8)
 
+#define IOAT_COMP_UPDATE_SHIFT	3
+#define IOAT_CMD_OP_SHIFT	24
+enum rte_ioat_ops {
+	ioat_op_copy = 0,	/* Standard DMA Operation */
+	ioat_op_fill		/* Block Fill */
+};
+
 /**
  * Hardware descriptor used by DSA hardware, for both bursts and
  * for individual operations.
@@ -203,11 +210,8 @@ struct rte_idxd_rawdev {
 	struct rte_idxd_desc_batch *batch_ring;
 };
 
-/*
- * Enqueue a copy operation onto the ioat device
- */
 static __rte_always_inline int
-__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+__ioat_write_desc(int dev_id, uint32_t op, uint64_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -229,7 +233,8 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc = &ioat->desc_ring[write];
 	desc->size = length;
 	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!(write & 0xF)) << 3);
+	desc->u.control_raw = (uint32_t)((op << IOAT_CMD_OP_SHIFT) |
+			(!(write & 0xF) << IOAT_COMP_UPDATE_SHIFT));
 	desc->src_addr = src;
 	desc->dest_addr = dst;
 
@@ -242,6 +247,27 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	return 1;
 }
 
+static __rte_always_inline int
+__ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int length, uintptr_t dst_hdl)
+{
+	static const uintptr_t null_hdl;
+
+	return __ioat_write_desc(dev_id, ioat_op_fill, pattern, dst, length,
+			null_hdl, dst_hdl);
+}
+
+/*
+ * Enqueue a copy operation onto the ioat device
+ */
+static __rte_always_inline int
+__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	return __ioat_write_desc(dev_id, ioat_op_copy, src, dst, length,
+			src_hdl, dst_hdl);
+}
+
 /* add fence to last written descriptor */
 static __rte_always_inline int
 __ioat_fence(int dev_id)
@@ -380,6 +406,23 @@ __idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
 	return 0;
 }
 
+static __rte_always_inline int
+__idxd_enqueue_fill(int dev_id, uint64_t pattern, rte_iova_t dst,
+		unsigned int length, uintptr_t dst_hdl)
+{
+	const struct rte_idxd_hw_desc desc = {
+			.op_flags =  (idxd_op_fill << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_CACHE_CONTROL,
+			.src = pattern,
+			.dst = dst,
+			.size = length
+	};
+	const struct rte_idxd_user_hdl hdl = {
+			.dst = dst_hdl
+	};
+	return __idxd_write_desc(dev_id, &desc, &hdl);
+}
+
 static __rte_always_inline int
 __idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
@@ -475,6 +518,18 @@ __idxd_completed_ops(int dev_id, uint8_t max_ops,
 	return n;
 }
 
+static inline int
+rte_ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int len, uintptr_t dst_hdl)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_enqueue_fill(dev_id, pattern, dst, len, dst_hdl);
+	else
+		return __ioat_enqueue_fill(dev_id, pattern, dst, len, dst_hdl);
+}
+
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v3 02/25] raw/ioat: fix missing close function
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 02/25] raw/ioat: fix missing close function Bruce Richardson
@ 2020-09-25 11:12     ` Bruce Richardson
  2020-09-25 11:12     ` Pai G, Sunil
  1 sibling, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-25 11:12 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Kevin Laatz, stable, Sunil Pai G

On Fri, Sep 25, 2020 at 12:08:47PM +0100, Bruce Richardson wrote:
> From: Kevin Laatz <kevin.laatz@intel.com>
> 
> When rte_rawdev_pmd_release() is called, rte_rawdev_close() looks for a
> dev_close function for the device causing a segmentation fault when no
> close() function is implemented for a driver.
> 
> This patch resolves the issue by adding a stub function ioat_dev_close().
> 
> Fixes: f687e842e328 ("raw/ioat: introduce IOAT driver")
> Cc: stable@dpdk.org
> 
> Reported-by: Sunil Pai G <sunil.pai.g@intel.com>
> Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
> ---
Forgot to add my own reviewed-by to this:

Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v3 02/25] raw/ioat: fix missing close function
  2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 02/25] raw/ioat: fix missing close function Bruce Richardson
  2020-09-25 11:12     ` Bruce Richardson
@ 2020-09-25 11:12     ` Pai G, Sunil
  1 sibling, 0 replies; 157+ messages in thread
From: Pai G, Sunil @ 2020-09-25 11:12 UTC (permalink / raw)
  To: Richardson, Bruce, dev; +Cc: Fu, Patrick, Laatz, Kevin, stable

> -----Original Message-----
> From: Richardson, Bruce <bruce.richardson@intel.com>
> Sent: Friday, September 25, 2020 4:39 PM
> To: dev@dpdk.org
> Cc: Fu, Patrick <patrick.fu@intel.com>; Laatz, Kevin <kevin.laatz@intel.com>;
> stable@dpdk.org; Pai G, Sunil <sunil.pai.g@intel.com>
> Subject: [PATCH v3 02/25] raw/ioat: fix missing close function
> 
> From: Kevin Laatz <kevin.laatz@intel.com>
> 
> When rte_rawdev_pmd_release() is called, rte_rawdev_close() looks for a
> dev_close function for the device causing a segmentation fault when no
> close() function is implemented for a driver.
> 
> This patch resolves the issue by adding a stub function ioat_dev_close().
> 
> Fixes: f687e842e328 ("raw/ioat: introduce IOAT driver")
> Cc: stable@dpdk.org
> 
> Reported-by: Sunil Pai G <sunil.pai.g@intel.com>
> Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
> ---
>  drivers/raw/ioat/ioat_rawdev.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
> index 7f1a15436..0732b059f 100644
> --- a/drivers/raw/ioat/ioat_rawdev.c
> +++ b/drivers/raw/ioat/ioat_rawdev.c
> @@ -203,6 +203,12 @@ ioat_xstats_reset(struct rte_rawdev *dev, const
> uint32_t *ids, uint32_t nb_ids)
>  	return 0;
>  }
> 
> +static int
> +ioat_dev_close(struct rte_rawdev *dev __rte_unused) {
> +	return 0;
> +}
> +
>  extern int ioat_rawdev_test(uint16_t dev_id);
> 
>  static int
> @@ -212,6 +218,7 @@ ioat_rawdev_create(const char *name, struct
> rte_pci_device *dev)
>  			.dev_configure = ioat_dev_configure,
>  			.dev_start = ioat_dev_start,
>  			.dev_stop = ioat_dev_stop,
> +			.dev_close = ioat_dev_close,
>  			.dev_info_get = ioat_dev_info_get,
>  			.xstats_get = ioat_xstats_get,
>  			.xstats_get_names = ioat_xstats_get_names,
> --
> 2.25.1
Acked-by: Sunil Pai G <sunil.pai.g@intel.com>

^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (21 preceding siblings ...)
  2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
@ 2020-09-28 16:42 ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 01/25] doc/api: add ioat driver to index Bruce Richardson
                     ` (26 more replies)
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
  24 siblings, 27 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson

This patchset adds some small enhancements, some rework and also support
for new hardware to the ioat rawdev driver. Most rework and enhancements
are largely self-explanatory from the individual patches.

The new hardware support is for the Intel(R) DSA accelerator which will be
present in future Intel processors. A description of this new hardware is
covered in [1]. Functions specific to the new hardware use the "idxd"
prefix, for consistency with the kernel driver.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

---
V4:
 * Fixed compile with FreeBSD clang
 * Improved autotests for fill operation

V3:
 * More doc updates including release note updates throughout the set
 * Added in fill operation
 * Added in fix for missing close operation
 * Added in fix for doc building to ensure ioat is in the index

V2:
 * Included documentation additions in the set
 * Split off the rawdev unit test changes to a separate patchset for easier
   review
 * General code improvements and cleanups 

Bruce Richardson (19):
  doc/api: add ioat driver to index
  raw/ioat: enable use from C++ code
  raw/ioat: include extra info in error messages
  raw/ioat: split header for readability
  raw/ioat: rename functions to be operation-agnostic
  raw/ioat: add separate API for fence call
  raw/ioat: make the HW register spec private
  raw/ioat: add skeleton for VFIO/UIO based DSA device
  raw/ioat: include example configuration script
  raw/ioat: create rawdev instances on idxd PCI probe
  raw/ioat: add datapath data structures for idxd devices
  raw/ioat: add configure function for idxd devices
  raw/ioat: add start and stop functions for idxd devices
  raw/ioat: add data path for idxd devices
  raw/ioat: add info function for idxd devices
  raw/ioat: create separate statistics structure
  raw/ioat: move xstats functions to common file
  raw/ioat: add xstats tracking for idxd devices
  raw/ioat: clean up use of common test function

Cheng Jiang (1):
  raw/ioat: add a flag to control copying handle parameters

Kevin Laatz (5):
  raw/ioat: fix missing close function
  usertools/dpdk-devbind.py: add support for DSA HW
  raw/ioat: add vdev probe for DSA/idxd devices
  raw/ioat: create rawdev instances for idxd vdevs
  raw/ioat: add fill operation

 doc/api/doxy-api-index.md                     |   1 +
 doc/api/doxy-api.conf.in                      |   1 +
 doc/guides/rawdevs/ioat.rst                   | 163 +++--
 doc/guides/rel_notes/release_20_11.rst        |  23 +
 doc/guides/sample_app_ug/ioat.rst             |   8 +-
 drivers/raw/ioat/dpdk_idxd_cfg.py             |  79 +++
 drivers/raw/ioat/idxd_pci.c                   | 345 ++++++++++
 drivers/raw/ioat/idxd_vdev.c                  | 233 +++++++
 drivers/raw/ioat/ioat_common.c                | 244 +++++++
 drivers/raw/ioat/ioat_private.h               |  82 +++
 drivers/raw/ioat/ioat_rawdev.c                |  92 +--
 drivers/raw/ioat/ioat_rawdev_test.c           | 130 +++-
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} |  90 ++-
 drivers/raw/ioat/meson.build                  |  15 +-
 drivers/raw/ioat/rte_ioat_rawdev.h            | 221 +++----
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 595 ++++++++++++++++++
 examples/ioat/ioatfwd.c                       |  16 +-
 lib/librte_eal/include/rte_common.h           |   1 +
 usertools/dpdk-devbind.py                     |   4 +-
 19 files changed, 1989 insertions(+), 354 deletions(-)
 create mode 100755 drivers/raw/ioat/dpdk_idxd_cfg.py
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/idxd_vdev.c
 create mode 100644 drivers/raw/ioat/ioat_common.c
 create mode 100644 drivers/raw/ioat/ioat_private.h
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (74%)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 01/25] doc/api: add ioat driver to index
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 02/25] raw/ioat: fix missing close function Bruce Richardson
                     ` (25 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add the ioat driver to the doxygen documentation.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 doc/api/doxy-api-index.md | 1 +
 doc/api/doxy-api.conf.in  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index b855a8f3b9..06e49f438d 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -44,6 +44,7 @@ The public API headers are grouped by topics:
   [ixgbe]              (@ref rte_pmd_ixgbe.h),
   [i40e]               (@ref rte_pmd_i40e.h),
   [ice]                (@ref rte_pmd_ice.h),
+  [ioat]               (@ref rte_ioat_rawdev.h),
   [bnxt]               (@ref rte_pmd_bnxt.h),
   [dpaa]               (@ref rte_pmd_dpaa.h),
   [dpaa2]              (@ref rte_pmd_dpaa2.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 42d38919d3..f87336365b 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -18,6 +18,7 @@ INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/drivers/net/softnic \
                           @TOPDIR@/drivers/raw/dpaa2_cmdif \
                           @TOPDIR@/drivers/raw/dpaa2_qdma \
+                          @TOPDIR@/drivers/raw/ioat \
                           @TOPDIR@/lib/librte_eal/include \
                           @TOPDIR@/lib/librte_eal/include/generic \
                           @TOPDIR@/lib/librte_acl \
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 02/25] raw/ioat: fix missing close function
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 01/25] doc/api: add ioat driver to index Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 03/25] raw/ioat: enable use from C++ code Bruce Richardson
                     ` (24 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Kevin Laatz, stable, Sunil Pai G, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

When rte_rawdev_pmd_release() is called, rte_rawdev_close() looks for a
dev_close function for the device, causing a segmentation fault when no
close() function is implemented for the driver.

This patch resolves the issue by adding a stub function ioat_dev_close().

Fixes: f687e842e328 ("raw/ioat: introduce IOAT driver")
Cc: stable@dpdk.org

Reported-by: Sunil Pai G <sunil.pai.g@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Sunil Pai G <sunil.pai.g@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 7f1a154360..0732b059fe 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -203,6 +203,12 @@ ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
 	return 0;
 }
 
+static int
+ioat_dev_close(struct rte_rawdev *dev __rte_unused)
+{
+	return 0;
+}
+
 extern int ioat_rawdev_test(uint16_t dev_id);
 
 static int
@@ -212,6 +218,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 			.dev_configure = ioat_dev_configure,
 			.dev_start = ioat_dev_start,
 			.dev_stop = ioat_dev_stop,
+			.dev_close = ioat_dev_close,
 			.dev_info_get = ioat_dev_info_get,
 			.xstats_get = ioat_xstats_get,
 			.xstats_get_names = ioat_xstats_get_names,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 03/25] raw/ioat: enable use from C++ code
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 01/25] doc/api: add ioat driver to index Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 02/25] raw/ioat: fix missing close function Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 04/25] raw/ioat: include extra info in error messages Bruce Richardson
                     ` (23 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

To allow the header file to be used from C++ code we need to ensure all
typecasts are explicit, and include an 'extern "C"' guard.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
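A minimal usage sketch, not part of the patch: with the extern "C" guard and
explicit casts in place, the inline functions now build cleanly from a C++
translation unit. File name and buffer addresses below are assumptions for
illustration only.

	/* copy_unit.cpp - illustrative sketch only */
	#include <rte_rawdev.h>
	#include <rte_ioat_rawdev.h>

	int enqueue_one(int dev_id, phys_addr_t src, phys_addr_t dst, unsigned int len)
	{
		/* no completion handles, no fence */
		return rte_ioat_enqueue_copy(dev_id, src, dst, len, 0, 0, 0);
	}
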
 drivers/raw/ioat/rte_ioat_rawdev.h | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index f765a65571..3d84192714 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -5,6 +5,10 @@
 #ifndef _RTE_IOAT_RAWDEV_H_
 #define _RTE_IOAT_RAWDEV_H_
 
+#ifdef __cplusplus
+extern "C" {
+#endif
+
 /**
  * @file rte_ioat_rawdev.h
  *
@@ -100,7 +104,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
 		int fence)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	unsigned short read = ioat->next_read;
 	unsigned short write = ioat->next_write;
 	unsigned short mask = ioat->ring_size - 1;
@@ -141,7 +146,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 static inline void
 rte_ioat_do_copies(int dev_id)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
 			.control.completion_update = 1;
 	rte_compiler_barrier();
@@ -190,7 +196,8 @@ static inline int
 rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	unsigned short mask = (ioat->ring_size - 1);
 	unsigned short read = ioat->next_read;
 	unsigned short end_read, count;
@@ -212,13 +219,13 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
 		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
 
-		_mm_storeu_si128((void *)&src_hdls[i],
+		_mm_storeu_si128((__m128i *)&src_hdls[i],
 				_mm_unpacklo_epi64(hdls0, hdls1));
-		_mm_storeu_si128((void *)&dst_hdls[i],
+		_mm_storeu_si128((__m128i *)&dst_hdls[i],
 				_mm_unpackhi_epi64(hdls0, hdls1));
 	}
 	for (; i < count; i++, read++) {
-		uintptr_t *hdls = (void *)&ioat->hdls[read & mask];
+		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
 		src_hdls[i] = hdls[0];
 		dst_hdls[i] = hdls[1];
 	}
@@ -228,4 +235,8 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+#ifdef __cplusplus
+}
+#endif
+
 #endif /* _RTE_IOAT_RAWDEV_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 04/25] raw/ioat: include extra info in error messages
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (2 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 03/25] raw/ioat: enable use from C++ code Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
                     ` (22 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

In case of any failures, include the function name and the line number of
the failing check in the error message, to make tracking down the failure easier.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
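For illustration only (the line number shown is hypothetical): a failing check
such as

	PRINT_ERR("Error with rte_ioat_enqueue_copy\n");

now reports its location as well as the message, e.g.

	In test_enqueue_copies:62 - Error with rte_ioat_enqueue_copy
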
 drivers/raw/ioat/ioat_rawdev_test.c | 53 +++++++++++++++++++----------
 1 file changed, 35 insertions(+), 18 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index c463a82ad6..8e7fd96afc 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -13,6 +13,23 @@ int ioat_rawdev_test(uint16_t dev_id); /* pre-define to keep compiler happy */
 static struct rte_mempool *pool;
 static unsigned short expected_ring_size;
 
+#define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
+
+static inline int
+__rte_format_printf(3, 4)
+print_err(const char *func, int lineno, const char *format, ...)
+{
+	va_list ap;
+	int ret;
+
+	ret = fprintf(stderr, "In %s:%d - ", func, lineno);
+	va_start(ap, format);
+	ret += vfprintf(stderr, format, ap);
+	va_end(ap);
+
+	return ret;
+}
+
 static int
 test_enqueue_copies(int dev_id)
 {
@@ -42,7 +59,7 @@ test_enqueue_copies(int dev_id)
 				(uintptr_t)src,
 				(uintptr_t)dst,
 				0 /* no fence */) != 1) {
-			printf("Error with rte_ioat_enqueue_copy\n");
+			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
 		rte_ioat_do_copies(dev_id);
@@ -50,18 +67,18 @@ test_enqueue_copies(int dev_id)
 
 		if (rte_ioat_completed_copies(dev_id, 1, (void *)&completed[0],
 				(void *)&completed[1]) != 1) {
-			printf("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_copies\n");
 			return -1;
 		}
 		if (completed[0] != src || completed[1] != dst) {
-			printf("Error with completions: got (%p, %p), not (%p,%p)\n",
+			PRINT_ERR("Error with completions: got (%p, %p), not (%p,%p)\n",
 					completed[0], completed[1], src, dst);
 			return -1;
 		}
 
 		for (i = 0; i < length; i++)
 			if (dst_data[i] != src_data[i]) {
-				printf("Data mismatch at char %u\n", i);
+				PRINT_ERR("Data mismatch at char %u\n", i);
 				return -1;
 			}
 		rte_pktmbuf_free(src);
@@ -94,7 +111,7 @@ test_enqueue_copies(int dev_id)
 					(uintptr_t)srcs[i],
 					(uintptr_t)dsts[i],
 					0 /* nofence */) != 1) {
-				printf("Error with rte_ioat_enqueue_copy for buffer %u\n",
+				PRINT_ERR("Error with rte_ioat_enqueue_copy for buffer %u\n",
 						i);
 				return -1;
 			}
@@ -104,18 +121,18 @@ test_enqueue_copies(int dev_id)
 
 		if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
 				(void *)completed_dst) != RTE_DIM(srcs)) {
-			printf("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_copies\n");
 			return -1;
 		}
 		for (i = 0; i < RTE_DIM(srcs); i++) {
 			char *src_data, *dst_data;
 
 			if (completed_src[i] != srcs[i]) {
-				printf("Error with source pointer %u\n", i);
+				PRINT_ERR("Error with source pointer %u\n", i);
 				return -1;
 			}
 			if (completed_dst[i] != dsts[i]) {
-				printf("Error with dest pointer %u\n", i);
+				PRINT_ERR("Error with dest pointer %u\n", i);
 				return -1;
 			}
 
@@ -123,7 +140,7 @@ test_enqueue_copies(int dev_id)
 			dst_data = rte_pktmbuf_mtod(dsts[i], char *);
 			for (j = 0; j < length; j++)
 				if (src_data[j] != dst_data[j]) {
-					printf("Error with copy of packet %u, byte %u\n",
+					PRINT_ERR("Error with copy of packet %u, byte %u\n",
 							i, j);
 					return -1;
 				}
@@ -150,26 +167,26 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	rte_rawdev_info_get(dev_id, &info, sizeof(p));
 	if (p.ring_size != expected_ring_size) {
-		printf("Error, initial ring size is not as expected (Actual: %d, Expected: %d)\n",
+		PRINT_ERR("Error, initial ring size is not as expected (Actual: %d, Expected: %d)\n",
 				(int)p.ring_size, expected_ring_size);
 		return -1;
 	}
 
 	p.ring_size = IOAT_TEST_RINGSIZE;
 	if (rte_rawdev_configure(dev_id, &info, sizeof(p)) != 0) {
-		printf("Error with rte_rawdev_configure()\n");
+		PRINT_ERR("Error with rte_rawdev_configure()\n");
 		return -1;
 	}
 	rte_rawdev_info_get(dev_id, &info, sizeof(p));
 	if (p.ring_size != IOAT_TEST_RINGSIZE) {
-		printf("Error, ring size is not %d (%d)\n",
+		PRINT_ERR("Error, ring size is not %d (%d)\n",
 				IOAT_TEST_RINGSIZE, (int)p.ring_size);
 		return -1;
 	}
 	expected_ring_size = p.ring_size;
 
 	if (rte_rawdev_start(dev_id) != 0) {
-		printf("Error with rte_rawdev_start()\n");
+		PRINT_ERR("Error with rte_rawdev_start()\n");
 		return -1;
 	}
 
@@ -180,7 +197,7 @@ ioat_rawdev_test(uint16_t dev_id)
 			2048, /* data room size */
 			info.socket_id);
 	if (pool == NULL) {
-		printf("Error with mempool creation\n");
+		PRINT_ERR("Error with mempool creation\n");
 		return -1;
 	}
 
@@ -189,14 +206,14 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	snames = malloc(sizeof(*snames) * nb_xstats);
 	if (snames == NULL) {
-		printf("Error allocating xstat names memory\n");
+		PRINT_ERR("Error allocating xstat names memory\n");
 		goto err;
 	}
 	rte_rawdev_xstats_names_get(dev_id, snames, nb_xstats);
 
 	ids = malloc(sizeof(*ids) * nb_xstats);
 	if (ids == NULL) {
-		printf("Error allocating xstat ids memory\n");
+		PRINT_ERR("Error allocating xstat ids memory\n");
 		goto err;
 	}
 	for (i = 0; i < nb_xstats; i++)
@@ -204,7 +221,7 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	stats = malloc(sizeof(*stats) * nb_xstats);
 	if (stats == NULL) {
-		printf("Error allocating xstat memory\n");
+		PRINT_ERR("Error allocating xstat memory\n");
 		goto err;
 	}
 
@@ -224,7 +241,7 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	rte_rawdev_stop(dev_id);
 	if (rte_rawdev_xstats_reset(dev_id, NULL, 0) != 0) {
-		printf("Error resetting xstat values\n");
+		PRINT_ERR("Error resetting xstat values\n");
 		goto err;
 	}
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 05/25] raw/ioat: add a flag to control copying handle parameters
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (3 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 04/25] raw/ioat: include extra info in error messages Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 06/25] raw/ioat: split header for readability Bruce Richardson
                     ` (21 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Cheng Jiang, Bruce Richardson, Kevin Laatz

From: Cheng Jiang <Cheng1.jiang@intel.com>

Add a flag which controls whether the rte_ioat_enqueue_copy and
rte_ioat_completed_copies functions should process handle parameters.
Skipping that processing can improve performance when handle parameters
are not necessary.

Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
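A minimal configuration sketch, illustrative only and not part of the patch;
the ring size choice and error-handling policy are assumptions:

	static int
	configure_without_handles(int dev_id)
	{
		struct rte_ioat_rawdev_config cfg = { .ring_size = 512, .hdls_disable = true };
		struct rte_rawdev_info info = { .dev_private = &cfg };

		return rte_rawdev_configure(dev_id, &info, sizeof(cfg));
	}

With hdls_disable set, the src_hdl/dst_hdl arguments to rte_ioat_enqueue_copy()
are ignored and rte_ioat_completed_copies() only returns a count of completed
operations.
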
 doc/guides/rawdevs/ioat.rst            |  3 ++
 doc/guides/rel_notes/release_20_11.rst |  6 ++++
 drivers/raw/ioat/ioat_rawdev.c         |  2 ++
 drivers/raw/ioat/rte_ioat_rawdev.h     | 45 +++++++++++++++++++-------
 4 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index c46460ff45..af00d77fbf 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -129,6 +129,9 @@ output, the ``dev_private`` structure element cannot be NULL, and must
 point to a valid ``rte_ioat_rawdev_config`` structure, containing the ring
 size to be used by the device. The ring size must be a power of two,
 between 64 and 4096.
+If it is not needed, the tracking by the driver of user-provided completion
+handles may be disabled by setting the ``hdls_disable`` flag in
+the configuration structure also.
 
 The following code shows how the device is configured in
 ``test_ioat_rawdev.c``:
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4eb3224a76..196209f63d 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -78,6 +78,12 @@ New Features
     ``--portmask=N``
     where N represents the hexadecimal bitmask of ports used.
 
+* **Updated ioat rawdev driver**
+
+  The ioat rawdev driver has been updated and enhanced. Changes include:
+
+  * Added a per-device configuration flag to disable management of user-provided completion handles
+
 
 Removed Items
 -------------
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 0732b059fe..ea9f51ffc1 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -58,6 +58,7 @@ ioat_dev_configure(const struct rte_rawdev *dev, rte_rawdev_obj_t config,
 		return -EINVAL;
 
 	ioat->ring_size = params->ring_size;
+	ioat->hdls_disable = params->hdls_disable;
 	if (ioat->desc_ring != NULL) {
 		rte_memzone_free(ioat->desc_mz);
 		ioat->desc_ring = NULL;
@@ -122,6 +123,7 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 		return -EINVAL;
 
 	cfg->ring_size = ioat->ring_size;
+	cfg->hdls_disable = ioat->hdls_disable;
 	return 0;
 }
 
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 3d84192714..28ce95cc90 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -38,7 +38,8 @@ extern "C" {
  * an ioat rawdev instance.
  */
 struct rte_ioat_rawdev_config {
-	unsigned short ring_size;
+	unsigned short ring_size; /**< size of job submission descriptor ring */
+	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
 /**
@@ -56,6 +57,7 @@ struct rte_ioat_rawdev {
 
 	unsigned short ring_size;
 	struct rte_ioat_generic_hw_desc *desc_ring;
+	bool hdls_disable;
 	__m128i *hdls; /* completion handles for returning to user */
 
 
@@ -88,10 +90,14 @@ struct rte_ioat_rawdev {
  *   The length of the data to be copied
  * @param src_hdl
  *   An opaque handle for the source data, to be returned when this operation
- *   has been completed and the user polls for the completion details
+ *   has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdl
  *   An opaque handle for the destination data, to be returned when this
- *   operation has been completed and the user polls for the completion details
+ *   operation has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param fence
  *   A flag parameter indicating that hardware should not begin to perform any
  *   subsequently enqueued copy operations until after this operation has
@@ -126,8 +132,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
 	desc->src_addr = src;
 	desc->dest_addr = dst;
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
 
-	ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl, (int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
 	ioat->enqueued++;
@@ -174,19 +182,29 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /**
  * Returns details of copy operations that have been completed
  *
- * Returns to the caller the user-provided "handles" for the copy operations
- * which have been completed by the hardware, and not already returned by
- * a previous call to this API.
+ * If the hdls_disable option was not set when the device was configured,
+ * the function will return to the caller the user-provided "handles" for
+ * the copy operations which have been completed by the hardware, and not
+ * already returned by a previous call to this API.
+ * If the hdls_disable option for the device was set on configure, the
+ * max_copies, src_hdls and dst_hdls parameters will be ignored, and the
+ * function returns the number of newly-completed operations.
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  * @param max_copies
  *   The number of entries which can fit in the src_hdls and dst_hdls
- *   arrays, i.e. max number of completed operations to report
+ *   arrays, i.e. max number of completed operations to report.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies
+ *   Array to hold the source handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies
+ *   Array to hold the destination handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @return
  *   -1 on error, with rte_errno set appropriately.
  *   Otherwise number of completed operations i.e. number of entries written
@@ -212,6 +230,11 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		return -1;
 	}
 
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
 	if (count > max_copies)
 		count = max_copies;
 
@@ -229,7 +252,7 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		src_hdls[i] = hdls[0];
 		dst_hdls[i] = hdls[1];
 	}
-
+end:
 	ioat->next_read = read;
 	ioat->completed += count;
 	return count;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 06/25] raw/ioat: split header for readability
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (4 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
                     ` (20 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Rather than having a single long, complicated header file for general use,
we can split things so that there is one header with all the publicly needed
information - data structs and function prototypes - while the internal
implementation details are kept in a separate file. This makes the APIs
easier to read, understand and use.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---

There are a couple of checkpatch errors about spacing in this patch,
however, it appears that these are false positives.
---
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev.h     | 147 +---------------------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 168 +++++++++++++++++++++++++
 3 files changed, 175 insertions(+), 141 deletions(-)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 0878418aee..f66e9b605e 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,4 +8,5 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
+		'rte_ioat_rawdev_fns.h',
 		'rte_ioat_spec.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 28ce95cc90..7067b352fc 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -18,12 +18,7 @@ extern "C" {
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
-#include <x86intrin.h>
-#include <rte_atomic.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+#include <rte_common.h>
 
 /** Name of the device driver */
 #define IOAT_PMD_RAWDEV_NAME rawdev_ioat
@@ -42,38 +37,6 @@ struct rte_ioat_rawdev_config {
 	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
-/**
- * @internal
- * Structure representing a device instance
- */
-struct rte_ioat_rawdev {
-	struct rte_rawdev *rawdev;
-	const struct rte_memzone *mz;
-	const struct rte_memzone *desc_mz;
-
-	volatile struct rte_ioat_registers *regs;
-	phys_addr_t status_addr;
-	phys_addr_t ring_addr;
-
-	unsigned short ring_size;
-	struct rte_ioat_generic_hw_desc *desc_ring;
-	bool hdls_disable;
-	__m128i *hdls; /* completion handles for returning to user */
-
-
-	unsigned short next_read;
-	unsigned short next_write;
-
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
-	/* to report completions, the device will write status back here */
-	volatile uint64_t status __rte_cache_aligned;
-};
-
 /**
  * Enqueue a copy operation onto the ioat device
  *
@@ -108,39 +71,7 @@ struct rte_ioat_rawdev {
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	unsigned short read = ioat->next_read;
-	unsigned short write = ioat->next_write;
-	unsigned short mask = ioat->ring_size - 1;
-	unsigned short space = mask + read - write;
-	struct rte_ioat_generic_hw_desc *desc;
-
-	if (space == 0) {
-		ioat->enqueue_failed++;
-		return 0;
-	}
-
-	ioat->next_write = write + 1;
-	write &= mask;
-
-	desc = &ioat->desc_ring[write];
-	desc->size = length;
-	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
-	desc->src_addr = src;
-	desc->dest_addr = dst;
-	if (!ioat->hdls_disable)
-		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
-					(int64_t)src_hdl);
-
-	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
-
-	ioat->enqueued++;
-	return 1;
-}
+		int fence);
 
 /**
  * Trigger hardware to begin performing enqueued copy operations
@@ -152,32 +83,7 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
-			.control.completion_update = 1;
-	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
-	ioat->started = ioat->enqueued;
-}
-
-/**
- * @internal
- * Returns the index of the last completed operation.
- */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
-{
-	uint64_t status = ioat->status;
-
-	/* lower 3 bits indicate "transfer status" : active, idle, halted.
-	 * We can ignore bit 0.
-	 */
-	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
-	return (status - ioat->ring_addr) >> 6;
-}
+rte_ioat_do_copies(int dev_id);
 
 /**
  * Returns details of copy operations that have been completed
@@ -212,51 +118,10 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
  */
 static inline int
 rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
-		uintptr_t *src_hdls, uintptr_t *dst_hdls)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	unsigned short mask = (ioat->ring_size - 1);
-	unsigned short read = ioat->next_read;
-	unsigned short end_read, count;
-	int error;
-	int i = 0;
-
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
-	count = (end_read - (read & mask)) & mask;
-
-	if (error) {
-		rte_errno = EIO;
-		return -1;
-	}
-
-	if (ioat->hdls_disable) {
-		read += count;
-		goto end;
-	}
+		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
-	if (count > max_copies)
-		count = max_copies;
-
-	for (; i < count - 1; i += 2, read += 2) {
-		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
-		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
-
-		_mm_storeu_si128((__m128i *)&src_hdls[i],
-				_mm_unpacklo_epi64(hdls0, hdls1));
-		_mm_storeu_si128((__m128i *)&dst_hdls[i],
-				_mm_unpackhi_epi64(hdls0, hdls1));
-	}
-	for (; i < count; i++, read++) {
-		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
-		src_hdls[i] = hdls[0];
-		dst_hdls[i] = hdls[1];
-	}
-end:
-	ioat->next_read = read;
-	ioat->completed += count;
-	return count;
-}
+/* include the implementation details from a separate file */
+#include "rte_ioat_rawdev_fns.h"
 
 #ifdef __cplusplus
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
new file mode 100644
index 0000000000..4b7bdb8e23
--- /dev/null
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -0,0 +1,168 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Intel Corporation
+ */
+#ifndef _RTE_IOAT_RAWDEV_FNS_H_
+#define _RTE_IOAT_RAWDEV_FNS_H_
+
+#include <x86intrin.h>
+#include <rte_rawdev.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device instance
+ */
+struct rte_ioat_rawdev {
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	const struct rte_memzone *desc_mz;
+
+	volatile struct rte_ioat_registers *regs;
+	phys_addr_t status_addr;
+	phys_addr_t ring_addr;
+
+	unsigned short ring_size;
+	bool hdls_disable;
+	struct rte_ioat_generic_hw_desc *desc_ring;
+	__m128i *hdls; /* completion handles for returning to user */
+
+
+	unsigned short next_read;
+	unsigned short next_write;
+
+	/* some statistics for tracking, if added/changed update xstats fns*/
+	uint64_t enqueue_failed __rte_cache_aligned;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+
+	/* to report completions, the device will write status back here */
+	volatile uint64_t status __rte_cache_aligned;
+};
+
+/*
+ * Enqueue a copy operation onto the ioat device
+ */
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
+		int fence)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short read = ioat->next_read;
+	unsigned short write = ioat->next_write;
+	unsigned short mask = ioat->ring_size - 1;
+	unsigned short space = mask + read - write;
+	struct rte_ioat_generic_hw_desc *desc;
+
+	if (space == 0) {
+		ioat->enqueue_failed++;
+		return 0;
+	}
+
+	ioat->next_write = write + 1;
+	write &= mask;
+
+	desc = &ioat->desc_ring[write];
+	desc->size = length;
+	/* set descriptor write-back every 16th descriptor */
+	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
+	desc->src_addr = src;
+	desc->dest_addr = dst;
+
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
+	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
+
+	ioat->enqueued++;
+	return 1;
+}
+
+/*
+ * Trigger hardware to begin performing enqueued copy operations
+ */
+static inline void
+rte_ioat_do_copies(int dev_id)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
+			.control.completion_update = 1;
+	rte_compiler_barrier();
+	ioat->regs->dmacount = ioat->next_write;
+	ioat->started = ioat->enqueued;
+}
+
+/**
+ * @internal
+ * Returns the index of the last completed operation.
+ */
+static inline int
+rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+{
+	uint64_t status = ioat->status;
+
+	/* lower 3 bits indicate "transfer status" : active, idle, halted.
+	 * We can ignore bit 0.
+	 */
+	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
+	return (status - ioat->ring_addr) >> 6;
+}
+
+/*
+ * Returns details of copy operations that have been completed
+ */
+static inline int
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short mask = (ioat->ring_size - 1);
+	unsigned short read = ioat->next_read;
+	unsigned short end_read, count;
+	int error;
+	int i = 0;
+
+	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	count = (end_read - (read & mask)) & mask;
+
+	if (error) {
+		rte_errno = EIO;
+		return -1;
+	}
+
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
+	if (count > max_copies)
+		count = max_copies;
+
+	for (; i < count - 1; i += 2, read += 2) {
+		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
+		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
+
+		_mm_storeu_si128((__m128i *)&src_hdls[i],
+				_mm_unpacklo_epi64(hdls0, hdls1));
+		_mm_storeu_si128((__m128i *)&dst_hdls[i],
+				_mm_unpackhi_epi64(hdls0, hdls1));
+	}
+	for (; i < count; i++, read++) {
+		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
+		src_hdls[i] = hdls[0];
+		dst_hdls[i] = hdls[1];
+	}
+
+end:
+	ioat->next_read = read;
+	ioat->completed += count;
+	return count;
+}
+
+#endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 07/25] raw/ioat: rename functions to be operation-agnostic
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (5 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 06/25] raw/ioat: split header for readability Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 08/25] raw/ioat: add separate API for fence call Bruce Richardson
                     ` (19 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Since the hardware supported by the ioat driver is capable of operations
other than just copies, we can rename the doorbell and completion-return
functions to not have "copies" in their names. These functions are not
copy-specific, and so will apply to other operations which may be added
to the driver later.

Also add a suitable warning, using the deprecation attribute, for any code
using the old function names.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>

---
Note: The checkpatches warning on this patch is a false positive due to
the addition of the new __rte_deprecated_msg macro in rte_common.h
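
Migration sketch (illustrative only; the device id and handle arrays are
assumed): existing code keeps building, but the old names now emit a
deprecation warning at compile time:

	uintptr_t src_hdls[64], dst_hdls[64];
	int n;

	rte_ioat_do_copies(dev_id);       /* old name - still works, warns as deprecated */
	rte_ioat_perform_ops(dev_id);     /* new name */

	n = rte_ioat_completed_copies(dev_id, 64, src_hdls, dst_hdls); /* deprecated */
	n = rte_ioat_completed_ops(dev_id, 64, src_hdls, dst_hdls);    /* new name */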
---
 doc/guides/rawdevs/ioat.rst            | 16 ++++++++--------
 doc/guides/rel_notes/release_20_11.rst |  9 +++++++++
 doc/guides/sample_app_ug/ioat.rst      |  8 ++++----
 drivers/raw/ioat/ioat_rawdev_test.c    | 12 ++++++------
 drivers/raw/ioat/rte_ioat_rawdev.h     | 14 +++++++-------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 20 ++++++++++++++++----
 examples/ioat/ioatfwd.c                |  4 ++--
 lib/librte_eal/include/rte_common.h    |  1 +
 8 files changed, 53 insertions(+), 31 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index af00d77fbf..3db5f5d097 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -157,9 +157,9 @@ Performing Data Copies
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 To perform data copies using IOAT rawdev devices, the functions
-``rte_ioat_enqueue_copy()`` and ``rte_ioat_do_copies()`` should be used.
+``rte_ioat_enqueue_copy()`` and ``rte_ioat_perform_ops()`` should be used.
 Once copies have been completed, the completion will be reported back when
-the application calls ``rte_ioat_completed_copies()``.
+the application calls ``rte_ioat_completed_ops()``.
 
 The ``rte_ioat_enqueue_copy()`` function enqueues a single copy to the
 device ring for copying at a later point. The parameters to that function
@@ -172,11 +172,11 @@ pointers if packet data is being copied.
 
 While the ``rte_ioat_enqueue_copy()`` function enqueues a copy operation on
 the device ring, the copy will not actually be performed until after the
-application calls the ``rte_ioat_do_copies()`` function. This function
+application calls the ``rte_ioat_perform_ops()`` function. This function
 informs the device hardware of the elements enqueued on the ring, and the
 device will begin to process them. It is expected that, for efficiency
 reasons, a burst of operations will be enqueued to the device via multiple
-enqueue calls between calls to the ``rte_ioat_do_copies()`` function.
+enqueue calls between calls to the ``rte_ioat_perform_ops()`` function.
 
 The following code from ``test_ioat_rawdev.c`` demonstrates how to enqueue
 a burst of copies to the device and start the hardware processing of them:
@@ -210,10 +210,10 @@ a burst of copies to the device and start the hardware processing of them:
                         return -1;
                 }
         }
-        rte_ioat_do_copies(dev_id);
+        rte_ioat_perform_ops(dev_id);
 
 To retrieve information about completed copies, the API
-``rte_ioat_completed_copies()`` should be used. This API will return to the
+``rte_ioat_completed_ops()`` should be used. This API will return to the
 application a set of completion handles passed in when the relevant copies
 were enqueued.
 
@@ -223,9 +223,9 @@ is correct before freeing the data buffers using the returned handles:
 
 .. code-block:: C
 
-        if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+        if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
                         (void *)completed_dst) != RTE_DIM(srcs)) {
-                printf("Error with rte_ioat_completed_copies\n");
+                printf("Error with rte_ioat_completed_ops\n");
                 return -1;
         }
         for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 196209f63d..c99c0b33f5 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -83,6 +83,11 @@ New Features
   The ioat rawdev driver has been updated and enhanced. Changes include:
 
   * Added a per-device configuration flag to disable management of user-provided completion handles
+  * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
+    and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
+    to better reflect the APIs' purposes, and remove the implication that
+    they are limited to copy operations only.
+    [Note: The old API is still provided but marked as deprecated in the code]
 
 
 Removed Items
@@ -178,6 +183,10 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* raw/ioat: As noted above, the ``rte_ioat_do_copies()`` and
+  ``rte_ioat_completed_copies()`` functions have been renamed to
+  ``rte_ioat_perform_ops()`` and ``rte_ioat_completed_ops()`` respectively.
+
 
 ABI Changes
 -----------
diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst
index 3f7d5c34a6..964160dff8 100644
--- a/doc/guides/sample_app_ug/ioat.rst
+++ b/doc/guides/sample_app_ug/ioat.rst
@@ -394,7 +394,7 @@ packet using ``pktmbuf_sw_copy()`` function and enqueue them to an rte_ring:
                 nb_enq = ioat_enqueue_packets(pkts_burst,
                     nb_rx, rx_config->ioat_ids[i]);
                 if (nb_enq > 0)
-                    rte_ioat_do_copies(rx_config->ioat_ids[i]);
+                    rte_ioat_perform_ops(rx_config->ioat_ids[i]);
             } else {
                 /* Perform packet software copy, free source packets */
                 int ret;
@@ -433,7 +433,7 @@ The packets are received in burst mode using ``rte_eth_rx_burst()``
 function. When using hardware copy mode the packets are enqueued in
 copying device's buffer using ``ioat_enqueue_packets()`` which calls
 ``rte_ioat_enqueue_copy()``. When all received packets are in the
-buffer the copy operations are started by calling ``rte_ioat_do_copies()``.
+buffer the copy operations are started by calling ``rte_ioat_perform_ops()``.
 Function ``rte_ioat_enqueue_copy()`` operates on physical address of
 the packet. Structure ``rte_mbuf`` contains only physical address to
 start of the data buffer (``buf_iova``). Thus the address is adjusted
@@ -490,7 +490,7 @@ or indirect mbufs, then multiple copy operations must be used.
 
 
 All completed copies are processed by ``ioat_tx_port()`` function. When using
-hardware copy mode the function invokes ``rte_ioat_completed_copies()``
+hardware copy mode the function invokes ``rte_ioat_completed_ops()``
 on each assigned IOAT channel to gather copied packets. If software copy
 mode is used the function dequeues copied packets from the rte_ring. Then each
 packet MAC address is changed if it was enabled. After that copies are sent
@@ -510,7 +510,7 @@ in burst mode using `` rte_eth_tx_burst()``.
         for (i = 0; i < tx_config->nb_queues; i++) {
             if (copy_mode == COPY_MODE_IOAT_NUM) {
                 /* Deque the mbufs from IOAT device. */
-                nb_dq = rte_ioat_completed_copies(
+                nb_dq = rte_ioat_completed_ops(
                     tx_config->ioat_ids[i], MAX_PKT_BURST,
                     (void *)mbufs_src, (void *)mbufs_dst);
             } else {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 8e7fd96afc..bb40eab6b7 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -62,12 +62,12 @@ test_enqueue_copies(int dev_id)
 			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(10);
 
-		if (rte_ioat_completed_copies(dev_id, 1, (void *)&completed[0],
+		if (rte_ioat_completed_ops(dev_id, 1, (void *)&completed[0],
 				(void *)&completed[1]) != 1) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		if (completed[0] != src || completed[1] != dst) {
@@ -116,12 +116,12 @@ test_enqueue_copies(int dev_id)
 				return -1;
 			}
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(100);
 
-		if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+		if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
 				(void *)completed_dst) != RTE_DIM(srcs)) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 7067b352fc..5b2c47e8c8 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -74,19 +74,19 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		int fence);
 
 /**
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  *
  * This API is used to write the "doorbell" to the hardware to trigger it
- * to begin the copy operations previously enqueued by rte_ioat_enqueue_copy()
+ * to begin the operations previously enqueued by rte_ioat_enqueue_copy()
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id);
+rte_ioat_perform_ops(int dev_id);
 
 /**
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  *
  * If the hdls_disable option was not set when the device was configured,
  * the function will return to the caller the user-provided "handles" for
@@ -104,11 +104,11 @@ rte_ioat_do_copies(int dev_id);
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies.
+ *   Array to hold the source handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies.
+ *   Array to hold the destination handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @return
@@ -117,7 +117,7 @@ rte_ioat_do_copies(int dev_id);
  *   to the src_hdls and dst_hdls array parameters.
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
 /* include the implementation details from a separate file */
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 4b7bdb8e23..b155d79c45 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -83,10 +83,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 }
 
 /*
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
+rte_ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -114,10 +114,10 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 }
 
 /*
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -165,4 +165,16 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static inline void
+__rte_deprecated_msg("use rte_ioat_perform_ops() instead")
+rte_ioat_do_copies(int dev_id) { rte_ioat_perform_ops(dev_id); }
+
+static inline int
+__rte_deprecated_msg("use rte_ioat_completed_ops() instead")
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	return rte_ioat_completed_ops(dev_id, max_copies, src_hdls, dst_hdls);
+}
+
 #endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 288a75c7b0..67f75737b6 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -406,7 +406,7 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
 			nb_enq = ioat_enqueue_packets(pkts_burst,
 				nb_rx, rx_config->ioat_ids[i]);
 			if (nb_enq > 0)
-				rte_ioat_do_copies(rx_config->ioat_ids[i]);
+				rte_ioat_perform_ops(rx_config->ioat_ids[i]);
 		} else {
 			/* Perform packet software copy, free source packets */
 			int ret;
@@ -452,7 +452,7 @@ ioat_tx_port(struct rxtx_port_config *tx_config)
 	for (i = 0; i < tx_config->nb_queues; i++) {
 		if (copy_mode == COPY_MODE_IOAT_NUM) {
 			/* Deque the mbufs from IOAT device. */
-			nb_dq = rte_ioat_completed_copies(
+			nb_dq = rte_ioat_completed_ops(
 				tx_config->ioat_ids[i], MAX_PKT_BURST,
 				(void *)mbufs_src, (void *)mbufs_dst);
 		} else {
diff --git a/lib/librte_eal/include/rte_common.h b/lib/librte_eal/include/rte_common.h
index 8f487a563d..2920255fc1 100644
--- a/lib/librte_eal/include/rte_common.h
+++ b/lib/librte_eal/include/rte_common.h
@@ -85,6 +85,7 @@ typedef uint16_t unaligned_uint16_t;
 
 /******* Macro to mark functions and fields scheduled for removal *****/
 #define __rte_deprecated	__attribute__((__deprecated__))
+#define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
 
 /**
  * Mark a function or variable to a weak reference.
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 08/25] raw/ioat: add separate API for fence call
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (6 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 09/25] raw/ioat: make the HW register spec private Bruce Richardson
                     ` (18 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Rather than having the fence signalled via a flag on a descriptor - which
requires reading the docs to find out whether the flag needs to go on the
last descriptor before, or the first descriptor after the fence - we can
instead add a separate fence API call. This becomes unambiguous to use,
since the fence call explicitly comes between two other enqueue calls. It
also allows more freedom of implementation in the driver code.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
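An ordering sketch, illustrative only; the buffer addresses, length and mbuf
pointers are assumptions. The fence now sits between two enqueue calls rather
than being a flag on one of them:

	rte_ioat_enqueue_copy(dev_id, src_iova, mid_iova, len, (uintptr_t)mb_src, (uintptr_t)mb_mid);
	rte_ioat_fence(dev_id); /* operations enqueued after this wait for the one above */
	rte_ioat_enqueue_copy(dev_id, mid_iova, dst_iova, len, (uintptr_t)mb_mid, (uintptr_t)mb_dst);
	rte_ioat_perform_ops(dev_id);
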
 doc/guides/rawdevs/ioat.rst            |  3 +--
 doc/guides/rel_notes/release_20_11.rst |  4 ++++
 drivers/raw/ioat/ioat_rawdev_test.c    |  6 ++----
 drivers/raw/ioat/rte_ioat_rawdev.h     | 26 ++++++++++++++++++++------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 22 +++++++++++++++++++---
 examples/ioat/ioatfwd.c                | 12 ++++--------
 6 files changed, 50 insertions(+), 23 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 3db5f5d097..71bca0b28f 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -203,8 +203,7 @@ a burst of copies to the device and start the hardware processing of them:
                                 dsts[i]->buf_iova + dsts[i]->data_off,
                                 length,
                                 (uintptr_t)srcs[i],
-                                (uintptr_t)dsts[i],
-                                0 /* nofence */) != 1) {
+                                (uintptr_t)dsts[i]) != 1) {
                         printf("Error with rte_ioat_enqueue_copy for buffer %u\n",
                                         i);
                         return -1;
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index c99c0b33f5..3868529ac2 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -88,6 +88,10 @@ New Features
     to better reflect the APIs' purposes, and remove the implication that
     they are limited to copy operations only.
     [Note: The old API is still provided but marked as deprecated in the code]
+  * Added a new API ``rte_ioat_fence()`` to add a fence between operations.
+    This API replaces the ``fence`` flag parameter in the ``rte_ioat_enqueue_copies()`` function,
+    and is clearer as there is no ambiguity as to whether the flag should be
+    set on the last operation before the fence or the first operation after it.
 
 
 Removed Items
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index bb40eab6b7..8ff5468035 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -57,8 +57,7 @@ test_enqueue_copies(int dev_id)
 				dst->buf_iova + dst->data_off,
 				length,
 				(uintptr_t)src,
-				(uintptr_t)dst,
-				0 /* no fence */) != 1) {
+				(uintptr_t)dst) != 1) {
 			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
@@ -109,8 +108,7 @@ test_enqueue_copies(int dev_id)
 					dsts[i]->buf_iova + dsts[i]->data_off,
 					length,
 					(uintptr_t)srcs[i],
-					(uintptr_t)dsts[i],
-					0 /* nofence */) != 1) {
+					(uintptr_t)dsts[i]) != 1) {
 				PRINT_ERR("Error with rte_ioat_enqueue_copy for buffer %u\n",
 						i);
 				return -1;
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 5b2c47e8c8..6b891cd449 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -61,17 +61,31 @@ struct rte_ioat_rawdev_config {
  *   operation has been completed and the user polls for the completion details.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
- * @param fence
- *   A flag parameter indicating that hardware should not begin to perform any
- *   subsequently enqueued copy operations until after this operation has
- *   completed
  * @return
  *   Number of operations enqueued, either 0 or 1
  */
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
-		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence);
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl);
+
+/**
+ * Add a fence to force ordering between operations
+ *
+ * This adds a fence to a sequence of operations to enforce ordering, such that
+ * all operations enqueued before the fence must be completed before operations
+ * after the fence.
+ * NOTE: Since this fence may be added as a flag to the last operation enqueued,
+ * this API may not function correctly when called immediately after an
+ * "rte_ioat_perform_ops" call i.e. before any new operations are enqueued.
+ *
+ * @param dev_id
+ *   The rawdev device id of the ioat instance
+ * @return
+ *   Number of fences enqueued, either 0 or 1
+ */
+static inline int
+rte_ioat_fence(int dev_id);
+
 
 /**
  * Trigger hardware to begin performing enqueued operations
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index b155d79c45..466721a23a 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -47,8 +47,7 @@ struct rte_ioat_rawdev {
  */
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
-		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence)
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -69,7 +68,7 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc = &ioat->desc_ring[write];
 	desc->size = length;
 	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
+	desc->u.control_raw = (uint32_t)((!(write & 0xF)) << 3);
 	desc->src_addr = src;
 	desc->dest_addr = dst;
 
@@ -82,6 +81,23 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	return 1;
 }
 
+/* add fence to last written descriptor */
+static inline int
+rte_ioat_fence(int dev_id)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short write = ioat->next_write;
+	unsigned short mask = ioat->ring_size - 1;
+	struct rte_ioat_generic_hw_desc *desc;
+
+	write = (write - 1) & mask;
+	desc = &ioat->desc_ring[write];
+
+	desc->u.control.fence = 1;
+	return 0;
+}
+
 /*
  * Trigger hardware to begin performing enqueued operations
  */
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 67f75737b6..e6d1d1236e 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -361,15 +361,11 @@ ioat_enqueue_packets(struct rte_mbuf **pkts,
 	for (i = 0; i < nb_rx; i++) {
 		/* Perform data copy */
 		ret = rte_ioat_enqueue_copy(dev_id,
-			pkts[i]->buf_iova
-			- addr_offset,
-			pkts_copy[i]->buf_iova
-			- addr_offset,
-			rte_pktmbuf_data_len(pkts[i])
-			+ addr_offset,
+			pkts[i]->buf_iova - addr_offset,
+			pkts_copy[i]->buf_iova - addr_offset,
+			rte_pktmbuf_data_len(pkts[i]) + addr_offset,
 			(uintptr_t)pkts[i],
-			(uintptr_t)pkts_copy[i],
-			0 /* nofence */);
+			(uintptr_t)pkts_copy[i]);
 
 		if (ret != 1)
 			break;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 09/25] raw/ioat: make the HW register spec private
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (7 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 08/25] raw/ioat: add separate API for fence call Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
                     ` (17 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Only a few definitions from the hardware spec are actually used in the
driver runtime, so we can copy over those few and make the rest of the spec
a private header in the driver.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c                |  3 ++
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} | 26 -----------
 drivers/raw/ioat/meson.build                  |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 43 +++++++++++++++++--
 4 files changed, 44 insertions(+), 31 deletions(-)
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (91%)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index ea9f51ffc1..aa59b731fd 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -4,10 +4,12 @@
 
 #include <rte_cycles.h>
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 #include <rte_string_fns.h>
 #include <rte_rawdev_pmd.h>
 
 #include "rte_ioat_rawdev.h"
+#include "ioat_spec.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -268,6 +270,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
+	ioat->doorbell = &ioat->regs->dmacount;
 	ioat->ring_size = 0;
 	ioat->desc_ring = NULL;
 	ioat->status_addr = ioat->mz->iova +
diff --git a/drivers/raw/ioat/rte_ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
similarity index 91%
rename from drivers/raw/ioat/rte_ioat_spec.h
rename to drivers/raw/ioat/ioat_spec.h
index c6e7929b2c..9645e16d41 100644
--- a/drivers/raw/ioat/rte_ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -86,32 +86,6 @@ struct rte_ioat_registers {
 
 #define RTE_IOAT_CHANCMP_ALIGN			8	/* CHANCMP address must be 64-bit aligned */
 
-struct rte_ioat_generic_hw_desc {
-	uint32_t size;
-	union {
-		uint32_t control_raw;
-		struct {
-			uint32_t int_enable: 1;
-			uint32_t src_snoop_disable: 1;
-			uint32_t dest_snoop_disable: 1;
-			uint32_t completion_update: 1;
-			uint32_t fence: 1;
-			uint32_t reserved2: 1;
-			uint32_t src_page_break: 1;
-			uint32_t dest_page_break: 1;
-			uint32_t bundle: 1;
-			uint32_t dest_dca: 1;
-			uint32_t hint: 1;
-			uint32_t reserved: 13;
-			uint32_t op: 8;
-		} control;
-	} u;
-	uint64_t src_addr;
-	uint64_t dest_addr;
-	uint64_t next;
-	uint64_t op_specific[4];
-};
-
 struct rte_ioat_dma_hw_desc {
 	uint32_t size;
 	union {
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index f66e9b605e..06636f8a9f 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,5 +8,4 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-		'rte_ioat_rawdev_fns.h',
-		'rte_ioat_spec.h')
+		'rte_ioat_rawdev_fns.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 466721a23a..c6e0b9a586 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -8,7 +8,36 @@
 #include <rte_rawdev.h>
 #include <rte_memzone.h>
 #include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device descriptor
+ */
+struct rte_ioat_generic_hw_desc {
+	uint32_t size;
+	union {
+		uint32_t control_raw;
+		struct {
+			uint32_t int_enable: 1;
+			uint32_t src_snoop_disable: 1;
+			uint32_t dest_snoop_disable: 1;
+			uint32_t completion_update: 1;
+			uint32_t fence: 1;
+			uint32_t reserved2: 1;
+			uint32_t src_page_break: 1;
+			uint32_t dest_page_break: 1;
+			uint32_t bundle: 1;
+			uint32_t dest_dca: 1;
+			uint32_t hint: 1;
+			uint32_t reserved: 13;
+			uint32_t op: 8;
+		} control;
+	} u;
+	uint64_t src_addr;
+	uint64_t dest_addr;
+	uint64_t next;
+	uint64_t op_specific[4];
+};
 
 /**
  * @internal
@@ -19,7 +48,7 @@ struct rte_ioat_rawdev {
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile struct rte_ioat_registers *regs;
+	volatile uint16_t *doorbell;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -40,8 +69,16 @@ struct rte_ioat_rawdev {
 
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
+
+	/* pointer to the register bar */
+	volatile struct rte_ioat_registers *regs;
 };
 
+#define RTE_IOAT_CHANSTS_IDLE			0x1
+#define RTE_IOAT_CHANSTS_SUSPENDED		0x2
+#define RTE_IOAT_CHANSTS_HALTED			0x3
+#define RTE_IOAT_CHANSTS_ARMED			0x4
+
 /*
  * Enqueue a copy operation onto the ioat device
  */
@@ -109,7 +146,7 @@ rte_ioat_perform_ops(int dev_id)
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
 			.control.completion_update = 1;
 	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
+	*ioat->doorbell = ioat->next_write;
 	ioat->started = ioat->enqueued;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 10/25] usertools/dpdk-devbind.py: add support for DSA HW
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (8 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 09/25] raw/ioat: make the HW register spec private Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
                     ` (16 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Kevin Laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Intel Data Streaming Accelerator (Intel DSA) is a high-performance data
copy and transformation accelerator which will be integrated in future
Intel processors [1].

Add DSA device support to dpdk-devbind.py script.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst | 2 ++
 usertools/dpdk-devbind.py              | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 3868529ac2..4d8b781547 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -82,6 +82,8 @@ New Features
 
   The ioat rawdev driver has been updated and enhanced. Changes include:
 
+  * Added support for Intel\ |reg| Data Streaming Accelerator hardware.
+    For more information, see https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
   * Added a per-device configuration flag to disable management of user-provided completion handles
   * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
     and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index 094c2ffc8b..f2916bef54 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -53,6 +53,8 @@
               'SVendor': None, 'SDevice': None}
 intel_ioat_icx = {'Class': '08', 'Vendor': '8086', 'Device': '0b00',
               'SVendor': None, 'SDevice': None}
+intel_idxd_spr = {'Class': '08', 'Vendor': '8086', 'Device': '0b25',
+              'SVendor': None, 'SDevice': None}
 intel_ntb_skx = {'Class': '06', 'Vendor': '8086', 'Device': '201c',
               'SVendor': None, 'SDevice': None}
 
@@ -62,7 +64,7 @@
 eventdev_devices = [cavium_sso, cavium_tim, octeontx2_sso]
 mempool_devices = [cavium_fpa, octeontx2_npa]
 compress_devices = [cavium_zip]
-misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_ntb_skx, octeontx2_dma]
+misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_idxd_spr, intel_ntb_skx, octeontx2_dma]
 
 # global dict ethernet devices present. Dictionary indexed by PCI address.
 # Each device within this is itself a dictionary of device properties
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (9 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
                     ` (15 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add in the basic probe/remove skeleton code for DSA devices which are bound
directly to vfio or uio driver. The kernel module for supporting these uses
the "idxd" name, so that name is used as function and file prefix to avoid
conflict with existing "ioat" prefixed functions.

Since we are adding new files to the driver and there will be common
definitions shared between the various files, we create a new internal
header file ioat_private.h to hold common macros and function prototypes.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 doc/guides/rawdevs/ioat.rst     | 69 ++++++++++-----------------------
 drivers/raw/ioat/idxd_pci.c     | 56 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 27 +++++++++++++
 drivers/raw/ioat/ioat_rawdev.c  |  9 +----
 drivers/raw/ioat/meson.build    |  6 ++-
 5 files changed, 108 insertions(+), 59 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/ioat_private.h

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 71bca0b28f..b898f98d5f 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -3,10 +3,12 @@
 
 .. include:: <isonum.txt>
 
-IOAT Rawdev Driver for Intel\ |reg| QuickData Technology
-======================================================================
+IOAT Rawdev Driver
+===================
 
 The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg|
+Data Streaming Accelerator `(Intel DSA)
+<https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator>`_ and for Intel\ |reg|
 QuickData Technology, part of Intel\ |reg| I/O Acceleration Technology
 `(Intel I/OAT)
 <https://www.intel.com/content/www/us/en/wireless-network/accel-technology.html>`_.
@@ -17,61 +19,30 @@ be done by software, freeing up CPU cycles for other tasks.
 Hardware Requirements
 ----------------------
 
-On Linux, the presence of an Intel\ |reg| QuickData Technology hardware can
-be detected by checking the output of the ``lspci`` command, where the
-hardware will be often listed as "Crystal Beach DMA" or "CBDMA". For
-example, on a system with Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @2.20GHz,
-lspci shows:
-
-.. code-block:: console
-
-  # lspci | grep DMA
-  00:04.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 0 (rev 01)
-  00:04.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 1 (rev 01)
-  00:04.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 2 (rev 01)
-  00:04.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 3 (rev 01)
-  00:04.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 4 (rev 01)
-  00:04.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 5 (rev 01)
-  00:04.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 6 (rev 01)
-  00:04.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 7 (rev 01)
-
-On a system with Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz, lspci
-shows:
-
-.. code-block:: console
-
-  # lspci | grep DMA
-  00:04.0 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.1 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.2 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.3 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.4 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.5 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.6 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.7 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-
+The ``dpdk-devbind.py`` script, included with DPDK,
+can be used to show the presence of supported hardware.
+Running ``dpdk-devbind.py --status-dev misc`` will show all the miscellaneous,
+or rawdev-based devices on the system.
+For Intel\ |reg| QuickData Technology devices, the hardware will be often listed as "Crystal Beach DMA",
+or "CBDMA".
+Intel\ |reg| DSA devices currently (at time of writing) appear as devices with ID "0b25",
+due to the absence of pci-id database entries for them at this point.
 
 Compilation
 ------------
 
-For builds done with ``make``, the driver compilation is enabled by the
-``CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV`` build configuration option. This is
-enabled by default in builds for x86 platforms, and disabled in other
-configurations.
-
-For builds using ``meson`` and ``ninja``, the driver will be built when the
-target platform is x86-based.
+For builds using ``meson`` and ``ninja``, the driver will be built when the target platform is x86-based.
+No additional compilation steps are necessary.
 
 Device Setup
 -------------
 
-The Intel\ |reg| QuickData Technology HW devices will need to be bound to a
-user-space IO driver for use. The script ``dpdk-devbind.py`` script
-included with DPDK can be used to view the state of the devices and to bind
-them to a suitable DPDK-supported kernel driver. When querying the status
-of the devices, they will appear under the category of "Misc (rawdev)
-devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
-used to see the state of those devices alone.
+The HW devices to be used will need to be bound to a user-space IO driver for use.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported kernel driver, such as ``vfio-pci``.
+For example::
+
+	$ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1
 
 Device Probing and Initialization
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
new file mode 100644
index 0000000000..1a30e9c316
--- /dev/null
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_pci.h>
+
+#include "ioat_private.h"
+
+#define IDXD_VENDOR_ID		0x8086
+#define IDXD_DEVICE_ID_SPR	0x0B25
+
+#define IDXD_PMD_RAWDEV_NAME_PCI rawdev_idxd_pci
+
+const struct rte_pci_id pci_id_idxd_map[] = {
+	{ RTE_PCI_DEVICE(IDXD_VENDOR_ID, IDXD_DEVICE_ID_SPR) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+{
+	int ret = 0;
+	char name[PCI_PRI_STR_SIZE];
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
+	dev->device.driver = &drv->driver;
+
+	return ret;
+}
+
+static int
+idxd_rawdev_remove_pci(struct rte_pci_device *dev)
+{
+	char name[PCI_PRI_STR_SIZE];
+	int ret = 0;
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+
+	IOAT_PMD_INFO("Closing %s on NUMA node %d",
+			name, dev->device.numa_node);
+
+	return ret;
+}
+
+struct rte_pci_driver idxd_pmd_drv_pci = {
+	.id_table = pci_id_idxd_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = idxd_rawdev_probe_pci,
+	.remove = idxd_rawdev_remove_pci,
+};
+
+RTE_PMD_REGISTER_PCI(IDXD_PMD_RAWDEV_NAME_PCI, idxd_pmd_drv_pci);
+RTE_PMD_REGISTER_PCI_TABLE(IDXD_PMD_RAWDEV_NAME_PCI, pci_id_idxd_map);
+RTE_PMD_REGISTER_KMOD_DEP(IDXD_PMD_RAWDEV_NAME_PCI,
+			  "* igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
new file mode 100644
index 0000000000..d87d4d055e
--- /dev/null
+++ b/drivers/raw/ioat/ioat_private.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _IOAT_PRIVATE_H_
+#define _IOAT_PRIVATE_H_
+
+/**
+ * @file ioat_private.h
+ *
+ * Private data structures for the idxd/DSA part of the ioat device driver
+ *
+ * @warning
+ * @b EXPERIMENTAL: these structures and APIs may change without prior notice
+ */
+
+extern int ioat_pmd_logtype;
+
+#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
+		ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
+
+#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
+#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
+#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
+#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
+
+#endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index aa59b731fd..1fe32278d2 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -10,6 +10,7 @@
 
 #include "rte_ioat_rawdev.h"
 #include "ioat_spec.h"
+#include "ioat_private.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -29,14 +30,6 @@ static struct rte_pci_driver ioat_pmd_drv;
 
 RTE_LOG_REGISTER(ioat_pmd_logtype, rawdev.ioat, INFO);
 
-#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
-	ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
-
-#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
-#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
-#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
-#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
-
 #define DESC_SZ sizeof(struct rte_ioat_generic_hw_desc)
 #define COMPLETION_SZ sizeof(__m128i)
 
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 06636f8a9f..3529635e9c 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -3,8 +3,10 @@
 
 build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
-sources = files('ioat_rawdev.c',
-		'ioat_rawdev_test.c')
+sources = files(
+	'idxd_pci.c',
+	'ioat_rawdev.c',
+	'ioat_rawdev_test.c')
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 12/25] raw/ioat: add vdev probe for DSA/idxd devices
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (10 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 13/25] raw/ioat: include example configuration script Bruce Richardson
                     ` (14 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Kevin Laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

The Intel DSA devices can be exposed to userspace via the idxd kernel driver,
so they can be used without having to bind them to vfio/uio. Therefore we add
support for using those kernel-configured devices as vdevs, taking as a
parameter the individual HW work queue to be used by the vdev.
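
As a usage sketch, the work queue can be given on the EAL command line
(e.g. --vdev=rawdev_idxd,wq=0.0) or the vdev can be created from application
code; the work queue id used below is only an example:

	#include <rte_bus_vdev.h>

	/* equivalent to passing "--vdev=rawdev_idxd,wq=0.0" to the EAL */
	if (rte_vdev_init("rawdev_idxd", "wq=0.0") != 0)
		return -1; /* vdev creation failed */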

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/rawdevs/ioat.rst  |  68 +++++++++++++++++--
 drivers/raw/ioat/idxd_vdev.c | 123 +++++++++++++++++++++++++++++++++++
 drivers/raw/ioat/meson.build |   6 +-
 3 files changed, 192 insertions(+), 5 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_vdev.c

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index b898f98d5f..5b8d27980e 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -37,9 +37,62 @@ No additional compilation steps are necessary.
 Device Setup
 -------------
 
+Depending on support provided by the PMD, HW devices can either use the kernel configured driver
+or be bound to a user-space IO driver for use.
+For example, Intel\ |reg| DSA devices can use the IDXD kernel driver or DPDK-supported drivers,
+such as ``vfio-pci``.
+
+Intel\ |reg| DSA devices using idxd kernel driver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use an Intel\ |reg| DSA device bound to the IDXD kernel driver, the device must first be configured.
+The `accel-config <https://github.com/intel/idxd-config>`_ utility library can be used for configuration.
+
+.. note::
+        The device configuration can also be done by directly interacting with the sysfs nodes.
+
+There are some mandatory configuration steps before being able to use a device with an application.
+The internal engines, which do the copies or other operations,
+and the work-queues, which are used by applications to assign work to the device,
+need to be assigned to groups, and the various other configuration options,
+such as priority or queue depth, need to be set for each queue.
+
+To assign an engine to a group::
+
+        $ accel-config config-engine dsa0/engine0.0 --group-id=0
+        $ accel-config config-engine dsa0/engine0.1 --group-id=1
+
+To assign work queues to groups for passing descriptors to the engines, a similar accel-config command can be used.
+However, the work queues also need to be configured depending on the use-case.
+Some configuration options include:
+
+* mode (Dedicated/Shared): Indicates whether a WQ may accept jobs from multiple queues simultaneously.
+* priority: WQ priority between 1 and 15. Larger value means higher priority.
+* wq-size: the size of the WQ. Sum of all WQ sizes must be less than the total-size defined by the device.
+* type: WQ type (kernel/mdev/user). Determines how the device is presented.
+* name: identifier given to the WQ.
+
+Example configuration for a work queue::
+
+        $ accel-config config-wq dsa0/wq0.0 --group-id=0 \
+           --mode=dedicated --priority=10 --wq-size=8 \
+           --type=user --name=app1
+
+Once the devices have been configured, they need to be enabled::
+
+        $ accel-config enable-device dsa0
+        $ accel-config enable-wq dsa0/wq0.0
+
+Check the device configuration::
+
+        $ accel-config list
+
+Devices using VFIO/UIO drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
 The HW devices to be used will need to be bound to a user-space IO driver for use.
 The ``dpdk-devbind.py`` script can be used to view the state of the devices
-and to bind them to a suitable DPDK-supported kernel driver, such as ``vfio-pci``.
+and to bind them to a suitable DPDK-supported driver, such as ``vfio-pci``.
 For example::
 
 	$ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1
@@ -47,9 +100,16 @@ For example::
 Device Probing and Initialization
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Once bound to a suitable kernel device driver, the HW devices will be found
-as part of the PCI scan done at application initialization time. No vdev
-parameters need to be passed to create or initialize the device.
+Devices bound to a suitable DPDK-supported VFIO/UIO driver will be found as
+part of the device scan done at application initialization time, without the
+need to pass any parameters to the application.
+
+If the device is bound to the IDXD kernel driver (and previously configured with sysfs),
+then a specific work queue needs to be passed to the application via a vdev parameter.
+This vdev parameter takes the driver name and work queue name as parameters.
+For example, to use work queue 0 on Intel\ |reg| DSA instance 0::
+
+        $ dpdk-test --no-pci --vdev=rawdev_idxd,wq=0.0
 
 Once probed successfully, the device will appear as a ``rawdev``, that is a
 "raw device type" inside DPDK, and can be accessed using APIs from the
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
new file mode 100644
index 0000000000..0509fc0842
--- /dev/null
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_vdev.h>
+#include <rte_kvargs.h>
+#include <rte_string_fns.h>
+#include <rte_rawdev_pmd.h>
+
+#include "ioat_private.h"
+
+/** Name of the device driver */
+#define IDXD_PMD_RAWDEV_NAME rawdev_idxd
+/* takes a work queue(WQ) as parameter */
+#define IDXD_ARG_WQ		"wq"
+
+static const char * const valid_args[] = {
+	IDXD_ARG_WQ,
+	NULL
+};
+
+struct idxd_vdev_args {
+	uint8_t device_id;
+	uint8_t wq_id;
+};
+
+static int
+idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
+			  void *extra_args)
+{
+	struct idxd_vdev_args *args = (struct idxd_vdev_args *)extra_args;
+	int dev, wq, bytes = -1;
+	int read = sscanf(value, "%d.%d%n", &dev, &wq, &bytes);
+
+	if (read != 2 || bytes != (int)strlen(value)) {
+		IOAT_PMD_ERR("Error parsing work-queue id. Must be in <dev_id>.<queue_id> format");
+		return -EINVAL;
+	}
+
+	if (dev >= UINT8_MAX || wq >= UINT8_MAX) {
+		IOAT_PMD_ERR("Device or work queue id out of range");
+		return -EINVAL;
+	}
+
+	args->device_id = dev;
+	args->wq_id = wq;
+
+	return 0;
+}
+
+static int
+idxd_vdev_parse_params(struct rte_kvargs *kvlist, struct idxd_vdev_args *args)
+{
+	if (rte_kvargs_count(kvlist, IDXD_ARG_WQ) == 1) {
+		if (rte_kvargs_process(kvlist, IDXD_ARG_WQ,
+				&idxd_rawdev_parse_wq, args) < 0) {
+			IOAT_PMD_ERR("Error parsing %s", IDXD_ARG_WQ);
+			goto free;
+		}
+	} else {
+		IOAT_PMD_ERR("%s is a mandatory arg", IDXD_ARG_WQ);
+		return -EINVAL;
+	}
+
+	return 0;
+
+free:
+	if (kvlist)
+		rte_kvargs_free(kvlist);
+	return -EINVAL;
+}
+
+static int
+idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
+{
+	struct rte_kvargs *kvlist;
+	struct idxd_vdev_args vdev_args;
+	const char *name;
+	int ret = 0;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Initializing pmd_idxd for %s", name);
+
+	kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev), valid_args);
+	if (kvlist == NULL) {
+		IOAT_PMD_ERR("Invalid kvargs key");
+		return -EINVAL;
+	}
+
+	ret = idxd_vdev_parse_params(kvlist, &vdev_args);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to parse kvargs");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
+{
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Remove DSA vdev %p", name);
+
+	return 0;
+}
+
+struct rte_vdev_driver idxd_rawdev_drv_vdev = {
+	.probe = idxd_rawdev_probe_vdev,
+	.remove = idxd_rawdev_remove_vdev,
+};
+
+RTE_PMD_REGISTER_VDEV(IDXD_PMD_RAWDEV_NAME, idxd_rawdev_drv_vdev);
+RTE_PMD_REGISTER_PARAM_STRING(IDXD_PMD_RAWDEV_NAME,
+			      "wq=<string>");
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 3529635e9c..b343b7367b 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -5,9 +5,13 @@ build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
+	'idxd_vdev.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
-deps += ['rawdev', 'bus_pci', 'mbuf']
+deps += ['bus_pci',
+	'bus_vdev',
+	'mbuf',
+	'rawdev']
 
 install_headers('rte_ioat_rawdev.h',
 		'rte_ioat_rawdev_fns.h')
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 13/25] raw/ioat: include example configuration script
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (11 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
                     ` (13 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Devices managed by the idxd kernel driver must be configured for DPDK use
before they can be used by the ioat driver. This example script serves both
as a quick way to get the driver set up with a simple configuration, and as
a basis for users to modify when creating their own configuration scripts.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 doc/guides/rawdevs/ioat.rst       |  2 +
 drivers/raw/ioat/dpdk_idxd_cfg.py | 79 +++++++++++++++++++++++++++++++
 2 files changed, 81 insertions(+)
 create mode 100755 drivers/raw/ioat/dpdk_idxd_cfg.py

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 5b8d27980e..7c2a2d4570 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -50,6 +50,8 @@ The `accel-config <https://github.com/intel/idxd-config>`_ utility library can b
 
 .. note::
         The device configuration can also be done by directly interacting with the sysfs nodes.
+        An example of how this may be done can be seen in the script ``dpdk_idxd_cfg.py``
+        included in the driver source directory.
 
 There are some mandatory configuration steps before being able to use a device with an application.
 The internal engines, which do the copies or other operations,
diff --git a/drivers/raw/ioat/dpdk_idxd_cfg.py b/drivers/raw/ioat/dpdk_idxd_cfg.py
new file mode 100755
index 0000000000..bce4bb5bd4
--- /dev/null
+++ b/drivers/raw/ioat/dpdk_idxd_cfg.py
@@ -0,0 +1,79 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+"""
+Configure an entire Intel DSA instance, using idxd kernel driver, for DPDK use
+"""
+
+import sys
+import argparse
+import os
+import os.path
+
+
+class SysfsDir:
+    "Used to read/write paths in a sysfs directory"
+    def __init__(self, path):
+        self.path = path
+
+    def read_int(self, filename):
+        "Return a value from sysfs file"
+        with open(os.path.join(self.path, filename)) as f:
+            return int(f.readline())
+
+    def write_values(self, values):
+        "write dictionary, where key is filename and value is value to write"
+        for filename, contents in values.items():
+            with open(os.path.join(self.path, filename), "w") as f:
+                f.write(str(contents))
+
+
+def configure_dsa(dsa_id, queues):
+    "Configure the DSA instance with appropriate number of queues"
+    dsa_dir = SysfsDir(f"/sys/bus/dsa/devices/dsa{dsa_id}")
+    drv_dir = SysfsDir("/sys/bus/dsa/drivers/dsa")
+
+    max_groups = dsa_dir.read_int("max_groups")
+    max_engines = dsa_dir.read_int("max_engines")
+    max_queues = dsa_dir.read_int("max_work_queues")
+    max_tokens = dsa_dir.read_int("max_tokens")
+
+    # we want one engine per group
+    nb_groups = min(max_engines, max_groups)
+    for grp in range(nb_groups):
+        dsa_dir.write_values({f"engine{dsa_id}.{grp}/group_id": grp})
+
+    nb_queues = min(queues, max_queues)
+    if queues > nb_queues:
+        print(f"Setting number of queues to max supported value: {max_queues}")
+
+    # configure each queue
+    for q in range(nb_queues):
+        wq_dir = SysfsDir(os.path.join(dsa_dir.path, f"wq{dsa_id}.{q}"))
+        wq_dir.write_values({"group_id": q % nb_groups,
+                             "type": "user",
+                             "mode": "dedicated",
+                             "name": f"dpdk_wq{dsa_id}.{q}",
+                             "priority": 1,
+                             "size": int(max_tokens / nb_queues)})
+
+    # enable device and then queues
+    drv_dir.write_values({"bind": f"dsa{dsa_id}"})
+    for q in range(nb_queues):
+        drv_dir.write_values({"bind": f"wq{dsa_id}.{q}"})
+
+
+def main(args):
+    "Main function, does arg parsing and calls config function"
+    arg_p = argparse.ArgumentParser(
+        description="Configure whole DSA device instance for DPDK use")
+    arg_p.add_argument('dsa_id', type=int, help="DSA instance number")
+    arg_p.add_argument('-q', metavar='queues', type=int, default=255,
+                       help="Number of queues to set up")
+    parsed_args = arg_p.parse_args(args[1:])
+    configure_dsa(parsed_args.dsa_id, parsed_args.q)
+
+
+if __name__ == "__main__":
+    main(sys.argv)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 14/25] raw/ioat: create rawdev instances on idxd PCI probe
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (12 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 13/25] raw/ioat: include example configuration script Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
                     ` (12 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

When a matching device is found via PCI probe, create a rawdev instance for
each queue on the hardware. Use an empty self-test function for these devices
so that the overall rawdev_autotest does not report failures.
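
Once probed, each work queue is usable as a regular rawdev, looked up by a
name of the form "<pci_address>-q<N>"; the PCI address below is only an
example:

	#include <rte_rawdev.h>

	/* rawdev created for queue 0 of the DSA device at 0000:6a:01.0 */
	uint16_t qdev_id = rte_rawdev_get_dev_id("0000:6a:01.0-q0");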

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            | 237 ++++++++++++++++++++++++-
 drivers/raw/ioat/ioat_common.c         |  68 +++++++
 drivers/raw/ioat/ioat_private.h        |  33 ++++
 drivers/raw/ioat/ioat_rawdev_test.c    |   7 +
 drivers/raw/ioat/ioat_spec.h           |  64 +++++++
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  35 +++-
 7 files changed, 442 insertions(+), 3 deletions(-)
 create mode 100644 drivers/raw/ioat/ioat_common.c

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 1a30e9c316..c3fec56d53 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -3,8 +3,10 @@
  */
 
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 
 #include "ioat_private.h"
+#include "ioat_spec.h"
 
 #define IDXD_VENDOR_ID		0x8086
 #define IDXD_DEVICE_ID_SPR	0x0B25
@@ -16,17 +18,246 @@ const struct rte_pci_id pci_id_idxd_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+static inline int
+idxd_pci_dev_command(struct idxd_rawdev *idxd, enum rte_idxd_cmds command)
+{
+	uint8_t err_code;
+	uint16_t qid = idxd->qid;
+	int i = 0;
+
+	if (command >= idxd_disable_wq && command <= idxd_reset_wq)
+		qid = (1 << qid);
+	rte_spinlock_lock(&idxd->u.pci->lk);
+	idxd->u.pci->regs->cmd = (command << IDXD_CMD_SHIFT) | qid;
+
+	do {
+		rte_pause();
+		err_code = idxd->u.pci->regs->cmdstatus;
+		if (++i >= 1000) {
+			IOAT_PMD_ERR("Timeout waiting for command response from HW");
+			rte_spinlock_unlock(&idxd->u.pci->lk);
+			return err_code;
+		}
+	} while (idxd->u.pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK);
+	rte_spinlock_unlock(&idxd->u.pci->lk);
+
+	return err_code & CMDSTATUS_ERR_MASK;
+}
+
+static int
+idxd_is_wq_enabled(struct idxd_rawdev *idxd)
+{
+	uint32_t state = idxd->u.pci->wq_regs[idxd->qid].wqcfg[WQ_STATE_IDX];
+	return ((state >> WQ_STATE_SHIFT) & WQ_STATE_MASK) == 0x1;
+}
+
+static const struct rte_rawdev_ops idxd_pci_ops = {
+		.dev_close = idxd_rawdev_close,
+		.dev_selftest = idxd_rawdev_test,
+};
+
+/* each portal uses 4 x 4k pages */
+#define IDXD_PORTAL_SIZE (4096 * 4)
+
+static int
+init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd)
+{
+	struct idxd_pci_common *pci;
+	uint8_t nb_groups, nb_engines, nb_wqs;
+	uint16_t grp_offset, wq_offset; /* how far into bar0 the regs are */
+	uint16_t wq_size, total_wq_size;
+	uint8_t lg2_max_batch, lg2_max_copy_size;
+	unsigned int i, err_code;
+
+	pci = malloc(sizeof(*pci));
+	if (pci == NULL) {
+		IOAT_PMD_ERR("%s: Can't allocate memory", __func__);
+		goto err;
+	}
+	rte_spinlock_init(&pci->lk);
+
+	/* assign the bar registers, and then configure device */
+	pci->regs = dev->mem_resource[0].addr;
+	grp_offset = (uint16_t)pci->regs->offsets[0];
+	pci->grp_regs = RTE_PTR_ADD(pci->regs, grp_offset * 0x100);
+	wq_offset = (uint16_t)(pci->regs->offsets[0] >> 16);
+	pci->wq_regs = RTE_PTR_ADD(pci->regs, wq_offset * 0x100);
+	pci->portals = dev->mem_resource[2].addr;
+
+	/* sanity check device status */
+	if (pci->regs->gensts & GENSTS_DEV_STATE_MASK) {
+		/* need function-level-reset (FLR) or is enabled */
+		IOAT_PMD_ERR("Device status is not disabled, cannot init");
+		goto err;
+	}
+	if (pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK) {
+		/* command in progress */
+		IOAT_PMD_ERR("Device has a command in progress, cannot init");
+		goto err;
+	}
+
+	/* read basic info about the hardware for use when configuring */
+	nb_groups = (uint8_t)pci->regs->grpcap;
+	nb_engines = (uint8_t)pci->regs->engcap;
+	nb_wqs = (uint8_t)(pci->regs->wqcap >> 16);
+	total_wq_size = (uint16_t)pci->regs->wqcap;
+	lg2_max_copy_size = (uint8_t)(pci->regs->gencap >> 16) & 0x1F;
+	lg2_max_batch = (uint8_t)(pci->regs->gencap >> 21) & 0x0F;
+
+	IOAT_PMD_DEBUG("nb_groups = %u, nb_engines = %u, nb_wqs = %u",
+			nb_groups, nb_engines, nb_wqs);
+
+	/* zero out any old config */
+	for (i = 0; i < nb_groups; i++) {
+		pci->grp_regs[i].grpengcfg = 0;
+		pci->grp_regs[i].grpwqcfg[0] = 0;
+	}
+	for (i = 0; i < nb_wqs; i++)
+		pci->wq_regs[i].wqcfg[0] = 0;
+
+	/* put each engine into a separate group to avoid reordering */
+	if (nb_groups > nb_engines)
+		nb_groups = nb_engines;
+	if (nb_groups < nb_engines)
+		nb_engines = nb_groups;
+
+	/* assign engines to groups, round-robin style */
+	for (i = 0; i < nb_engines; i++) {
+		IOAT_PMD_DEBUG("Assigning engine %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpengcfg |= (1ULL << i);
+	}
+
+	/* now do the same for queues and give work slots to each queue */
+	wq_size = total_wq_size / nb_wqs;
+	IOAT_PMD_DEBUG("Work queue size = %u, max batch = 2^%u, max copy = 2^%u",
+			wq_size, lg2_max_batch, lg2_max_copy_size);
+	for (i = 0; i < nb_wqs; i++) {
+		/* add engine "i" to a group */
+		IOAT_PMD_DEBUG("Assigning work queue %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpwqcfg[0] |= (1ULL << i);
+		/* now configure it, in terms of size, max batch, mode */
+		pci->wq_regs[i].wqcfg[WQ_SIZE_IDX] = wq_size;
+		pci->wq_regs[i].wqcfg[WQ_MODE_IDX] = (1 << WQ_PRIORITY_SHIFT) |
+				WQ_MODE_DEDICATED;
+		pci->wq_regs[i].wqcfg[WQ_SIZES_IDX] = lg2_max_copy_size |
+				(lg2_max_batch << WQ_BATCH_SZ_SHIFT);
+	}
+
+	/* dump the group configuration to output */
+	for (i = 0; i < nb_groups; i++) {
+		IOAT_PMD_DEBUG("## Group %d", i);
+		IOAT_PMD_DEBUG("    GRPWQCFG: %"PRIx64, pci->grp_regs[i].grpwqcfg[0]);
+		IOAT_PMD_DEBUG("    GRPENGCFG: %"PRIx64, pci->grp_regs[i].grpengcfg);
+		IOAT_PMD_DEBUG("    GRPFLAGS: %"PRIx32, pci->grp_regs[i].grpflags);
+	}
+
+	idxd->u.pci = pci;
+	idxd->max_batches = wq_size;
+
+	/* enable the device itself */
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error enabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device enabled OK");
+
+	return nb_wqs;
+
+err:
+	free(pci);
+	return -1;
+}
+
 static int
 idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 {
-	int ret = 0;
+	struct idxd_rawdev idxd = {{0}}; /* Double {} to avoid error on BSD12 */
+	uint8_t nb_wqs;
+	int qid, ret = 0;
 	char name[PCI_PRI_STR_SIZE];
 
 	rte_pci_device_name(&dev->addr, name, sizeof(name));
 	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
 	dev->device.driver = &drv->driver;
 
-	return ret;
+	ret = init_pci_device(dev, &idxd);
+	if (ret < 0) {
+		IOAT_PMD_ERR("Error initializing PCI hardware");
+		return ret;
+	}
+	nb_wqs = (uint8_t)ret;
+
+	/* set up one device for each queue */
+	for (qid = 0; qid < nb_wqs; qid++) {
+		char qname[32];
+
+		/* add the queue number to each device name */
+		snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
+		idxd.qid = qid;
+		idxd.public.portal = RTE_PTR_ADD(idxd.u.pci->portals,
+				qid * IDXD_PORTAL_SIZE);
+		if (idxd_is_wq_enabled(&idxd))
+			IOAT_PMD_ERR("Error, WQ %u seems enabled", qid);
+		ret = idxd_rawdev_create(qname, &dev->device,
+				&idxd, &idxd_pci_ops);
+		if (ret != 0) {
+			IOAT_PMD_ERR("Failed to create rawdev %s", name);
+			if (qid == 0) /* if no devices using this, free pci */
+				free(idxd.u.pci);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+idxd_rawdev_destroy(const char *name)
+{
+	int ret;
+	uint8_t err_code;
+	struct rte_rawdev *rdev;
+	struct idxd_rawdev *idxd;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid device name");
+		return -EINVAL;
+	}
+
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* disable the device */
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error disabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device disabled OK");
+
+	/* free device memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+		rte_memzone_free(idxd->mz);
+	}
+
+	/* rte_rawdev_close is called by pmd_release */
+	ret = rte_rawdev_pmd_release(rdev);
+	if (ret)
+		IOAT_PMD_DEBUG("Device cleanup failed");
+
+	return 0;
 }
 
 static int
@@ -40,6 +271,8 @@ idxd_rawdev_remove_pci(struct rte_pci_device *dev)
 	IOAT_PMD_INFO("Closing %s on NUMA node %d",
 			name, dev->device.numa_node);
 
+	ret = idxd_rawdev_destroy(name);
+
 	return ret;
 }
 
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
new file mode 100644
index 0000000000..c3aa015ed3
--- /dev/null
+++ b/drivers/raw/ioat/ioat_common.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_rawdev_pmd.h>
+#include <rte_memzone.h>
+#include <rte_common.h>
+
+#include "ioat_private.h"
+
+int
+idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
+{
+	return 0;
+}
+
+int
+idxd_rawdev_create(const char *name, struct rte_device *dev,
+		   const struct idxd_rawdev *base_idxd,
+		   const struct rte_rawdev_ops *ops)
+{
+	struct idxd_rawdev *idxd;
+	struct rte_rawdev *rawdev = NULL;
+	const struct rte_memzone *mz = NULL;
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	int ret = 0;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid name of the device!");
+		ret = -EINVAL;
+		goto cleanup;
+	}
+
+	/* Allocate device structure */
+	rawdev = rte_rawdev_pmd_allocate(name, sizeof(struct idxd_rawdev),
+					 dev->numa_node);
+	if (rawdev == NULL) {
+		IOAT_PMD_ERR("Unable to allocate raw device");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+
+	snprintf(mz_name, sizeof(mz_name), "rawdev%u_private", rawdev->dev_id);
+	mz = rte_memzone_reserve(mz_name, sizeof(struct idxd_rawdev),
+			dev->numa_node, RTE_MEMZONE_IOVA_CONTIG);
+	if (mz == NULL) {
+		IOAT_PMD_ERR("Unable to reserve memzone for private data\n");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+	rawdev->dev_private = mz->addr;
+	rawdev->dev_ops = ops;
+	rawdev->device = dev;
+	rawdev->driver_name = IOAT_PMD_RAWDEV_NAME_STR;
+
+	idxd = rawdev->dev_private;
+	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->rawdev = rawdev;
+	idxd->mz = mz;
+
+	return 0;
+
+cleanup:
+	if (rawdev)
+		rte_rawdev_pmd_release(rawdev);
+
+	return ret;
+}
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index d87d4d055e..53f00a9f3c 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -14,6 +14,10 @@
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
+#include <rte_spinlock.h>
+#include <rte_rawdev_pmd.h>
+#include "rte_ioat_rawdev.h"
+
 extern int ioat_pmd_logtype;
 
 #define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
@@ -24,4 +28,33 @@ extern int ioat_pmd_logtype;
 #define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
 #define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
 
+struct idxd_pci_common {
+	rte_spinlock_t lk;
+	volatile struct rte_idxd_bar0 *regs;
+	volatile struct rte_idxd_wqcfg *wq_regs;
+	volatile struct rte_idxd_grpcfg *grp_regs;
+	volatile void *portals;
+};
+
+struct idxd_rawdev {
+	struct rte_idxd_rawdev public; /* the public members, must be first */
+
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	uint8_t qid;
+	uint16_t max_batches;
+
+	union {
+		struct idxd_pci_common *pci;
+	} u;
+};
+
+extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
+		       const struct idxd_rawdev *idxd,
+		       const struct rte_rawdev_ops *ops);
+
+extern int idxd_rawdev_close(struct rte_rawdev *dev);
+
+extern int idxd_rawdev_test(uint16_t dev_id);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 8ff5468035..7cd0f4abf5 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -7,6 +7,7 @@
 #include <rte_mbuf.h>
 #include "rte_rawdev.h"
 #include "rte_ioat_rawdev.h"
+#include "ioat_private.h"
 
 int ioat_rawdev_test(uint16_t dev_id); /* pre-define to keep compiler happy */
 
@@ -258,3 +259,9 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
+
+int
+idxd_rawdev_test(uint16_t dev_id __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/raw/ioat/ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
index 9645e16d41..1aa768b9ae 100644
--- a/drivers/raw/ioat/ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -268,6 +268,70 @@ union rte_ioat_hw_desc {
 	struct rte_ioat_pq_update_hw_desc pq_update;
 };
 
+/*** Definitions for Intel(R) Data Streaming Accelerator Follow ***/
+
+#define IDXD_CMD_SHIFT 20
+enum rte_idxd_cmds {
+	idxd_enable_dev = 1,
+	idxd_disable_dev,
+	idxd_drain_all,
+	idxd_abort_all,
+	idxd_reset_device,
+	idxd_enable_wq,
+	idxd_disable_wq,
+	idxd_drain_wq,
+	idxd_abort_wq,
+	idxd_reset_wq,
+};
+
+/* General bar0 registers */
+struct rte_idxd_bar0 {
+	uint32_t __rte_cache_aligned version;    /* offset 0x00 */
+	uint64_t __rte_aligned(0x10) gencap;     /* offset 0x10 */
+	uint64_t __rte_aligned(0x10) wqcap;      /* offset 0x20 */
+	uint64_t __rte_aligned(0x10) grpcap;     /* offset 0x30 */
+	uint64_t __rte_aligned(0x08) engcap;     /* offset 0x38 */
+	uint64_t __rte_aligned(0x10) opcap;      /* offset 0x40 */
+	uint64_t __rte_aligned(0x20) offsets[2]; /* offset 0x60 */
+	uint32_t __rte_aligned(0x20) gencfg;     /* offset 0x80 */
+	uint32_t __rte_aligned(0x08) genctrl;    /* offset 0x88 */
+	uint32_t __rte_aligned(0x10) gensts;     /* offset 0x90 */
+	uint32_t __rte_aligned(0x08) intcause;   /* offset 0x98 */
+	uint32_t __rte_aligned(0x10) cmd;        /* offset 0xA0 */
+	uint32_t __rte_aligned(0x08) cmdstatus;  /* offset 0xA8 */
+	uint64_t __rte_aligned(0x20) swerror[4]; /* offset 0xC0 */
+};
+
+struct rte_idxd_wqcfg {
+	uint32_t wqcfg[8] __rte_aligned(32); /* 32-byte register */
+};
+
+#define WQ_SIZE_IDX      0 /* size is in first 32-bit value */
+#define WQ_THRESHOLD_IDX 1 /* WQ threshold second 32-bits */
+#define WQ_MODE_IDX      2 /* WQ mode and other flags */
+#define WQ_SIZES_IDX     3 /* WQ transfer and batch sizes */
+#define WQ_OCC_INT_IDX   4 /* WQ occupancy interrupt handle */
+#define WQ_OCC_LIMIT_IDX 5 /* WQ occupancy limit */
+#define WQ_STATE_IDX     6 /* WQ state and occupancy state */
+
+#define WQ_MODE_SHARED    0
+#define WQ_MODE_DEDICATED 1
+#define WQ_PRIORITY_SHIFT 4
+#define WQ_BATCH_SZ_SHIFT 5
+#define WQ_STATE_SHIFT 30
+#define WQ_STATE_MASK 0x3
+
+struct rte_idxd_grpcfg {
+	uint64_t grpwqcfg[4]  __rte_cache_aligned; /* 64-byte register set */
+	uint64_t grpengcfg;  /* offset 32 */
+	uint32_t grpflags;   /* offset 40 */
+};
+
+#define GENSTS_DEV_STATE_MASK 0x03
+#define CMDSTATUS_ACTIVE_SHIFT 31
+#define CMDSTATUS_ACTIVE_MASK (1 << 31)
+#define CMDSTATUS_ERR_MASK 0xFF
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index b343b7367b..5eff76a1a3 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -6,6 +6,7 @@ reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
 	'idxd_vdev.c',
+	'ioat_common.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
 deps += ['bus_pci',
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index c6e0b9a586..fa2eb5334c 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -41,9 +41,20 @@ struct rte_ioat_generic_hw_desc {
 
 /**
  * @internal
- * Structure representing a device instance
+ * Identify the data path to use.
+ * Must be first field of rte_ioat_rawdev and rte_idxd_rawdev structs
+ */
+enum rte_ioat_dev_type {
+	RTE_IOAT_DEV,
+	RTE_IDXD_DEV,
+};
+
+/**
+ * @internal
+ * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	enum rte_ioat_dev_type type;
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
@@ -79,6 +90,28 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED			0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/**
+ * @internal
+ * Structure representing an IDXD device instance
+ */
+struct rte_idxd_rawdev {
+	enum rte_ioat_dev_type type;
+	void *portal; /* address to write the batch descriptor */
+
+	/* counters to track the batches and the individual op handles */
+	uint16_t batch_ring_sz;  /* size of batch ring */
+	uint16_t hdl_ring_sz;    /* size of the user hdl ring */
+
+	uint16_t next_batch;     /* where we write descriptor ops */
+	uint16_t next_completed; /* batch where we read completions */
+	uint16_t next_ret_hdl;   /* the next user hdl to return */
+	uint16_t last_completed_hdl; /* the last user hdl that has completed */
+	uint16_t next_free_hdl;  /* where the handle for next op will go */
+
+	struct rte_idxd_user_hdl *hdl_ring;
+	struct rte_idxd_desc_batch *batch_ring;
+};
+
 /*
  * Enqueue a copy operation onto the ioat device
  */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 15/25] raw/ioat: create rawdev instances for idxd vdevs
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (13 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
                     ` (11 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Kevin Laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

For each vdev (DSA work queue) instance, create a rawdev instance.
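A minimal, standalone sketch of what the probe below does for each work
queue: read the WQ size from sysfs and map the 4k WQ portal through its
character device. The paths and mapping flags follow the code in this
patch; the helper name and the trimmed error handling are illustrative
only.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch of the per-WQ setup mirroring idxd_vdev_mmap_wq() and
 * idxd_vdev_get_max_batches() below (illustrative helper, not driver API).
 */
static void *
map_dsa_wq(unsigned int dsa_id, unsigned int wq_id, int *wq_size)
{
	char path[256];
	void *portal;
	FILE *f;
	int fd;

	/* the WQ size (number of descriptors) is exported via sysfs */
	snprintf(path, sizeof(path), "/sys/bus/dsa/devices/wq%u.%u/size",
			dsa_id, wq_id);
	f = fopen(path, "r");
	if (f == NULL || fscanf(f, "%d", wq_size) != 1)
		*wq_size = -1;
	if (f != NULL)
		fclose(f);

	/* the portal is a 4k, write-only mapping of the WQ char device */
	snprintf(path, sizeof(path), "/dev/dsa/wq%u.%u", dsa_id, wq_id);
	fd = open(path, O_RDWR);
	if (fd < 0)
		return NULL;
	portal = mmap(NULL, 0x1000, PROT_WRITE, MAP_SHARED, fd, 0);
	close(fd);
	return portal == MAP_FAILED ? NULL : portal;
}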

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/raw/ioat/idxd_vdev.c    | 106 +++++++++++++++++++++++++++++++-
 drivers/raw/ioat/ioat_private.h |   4 ++
 2 files changed, 109 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 0509fc0842..e61c26c1b4 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -2,6 +2,12 @@
  * Copyright(c) 2020 Intel Corporation
  */
 
+#include <fcntl.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sys/mman.h>
+
+#include <rte_memzone.h>
 #include <rte_bus_vdev.h>
 #include <rte_kvargs.h>
 #include <rte_string_fns.h>
@@ -24,6 +30,36 @@ struct idxd_vdev_args {
 	uint8_t wq_id;
 };
 
+static const struct rte_rawdev_ops idxd_vdev_ops = {
+		.dev_close = idxd_rawdev_close,
+		.dev_selftest = idxd_rawdev_test,
+};
+
+static void *
+idxd_vdev_mmap_wq(struct idxd_vdev_args *args)
+{
+	void *addr;
+	char path[PATH_MAX];
+	int fd;
+
+	snprintf(path, sizeof(path), "/dev/dsa/wq%u.%u",
+			args->device_id, args->wq_id);
+	fd = open(path, O_RDWR);
+	if (fd < 0) {
+		IOAT_PMD_ERR("Failed to open device path");
+		return NULL;
+	}
+
+	addr = mmap(NULL, 0x1000, PROT_WRITE, MAP_SHARED, fd, 0);
+	close(fd);
+	if (addr == MAP_FAILED) {
+		IOAT_PMD_ERR("Failed to mmap device");
+		return NULL;
+	}
+
+	return addr;
+}
+
 static int
 idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
 			  void *extra_args)
@@ -70,10 +106,32 @@ idxd_vdev_parse_params(struct rte_kvargs *kvlist, struct idxd_vdev_args *args)
 	return -EINVAL;
 }
 
+static int
+idxd_vdev_get_max_batches(struct idxd_vdev_args *args)
+{
+	char sysfs_path[PATH_MAX];
+	FILE *f;
+	int ret;
+
+	snprintf(sysfs_path, sizeof(sysfs_path),
+			"/sys/bus/dsa/devices/wq%u.%u/size",
+			args->device_id, args->wq_id);
+	f = fopen(sysfs_path, "r");
+	if (f == NULL)
+		return -1;
+
+	if (fscanf(f, "%d", &ret) != 1)
+		ret = -1;
+
+	fclose(f);
+	return ret;
+}
+
 static int
 idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 {
 	struct rte_kvargs *kvlist;
+	struct idxd_rawdev idxd = {{0}}; /* double {} to avoid error on BSD12 */
 	struct idxd_vdev_args vdev_args;
 	const char *name;
 	int ret = 0;
@@ -96,13 +154,32 @@ idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 		return -EINVAL;
 	}
 
+	idxd.qid = vdev_args.wq_id;
+	idxd.u.vdev.dsa_id = vdev_args.device_id;
+	idxd.max_batches = idxd_vdev_get_max_batches(&vdev_args);
+
+	idxd.public.portal = idxd_vdev_mmap_wq(&vdev_args);
+	if (idxd.public.portal == NULL) {
+		IOAT_PMD_ERR("WQ mmap failed");
+		return -ENOENT;
+	}
+
+	ret = idxd_rawdev_create(name, &vdev->device, &idxd, &idxd_vdev_ops);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to create rawdev %s", name);
+		return ret;
+	}
+
 	return 0;
 }
 
 static int
 idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 {
+	struct idxd_rawdev *idxd;
 	const char *name;
+	struct rte_rawdev *rdev;
+	int ret = 0;
 
 	name = rte_vdev_device_name(vdev);
 	if (name == NULL)
@@ -110,7 +187,34 @@ idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 
 	IOAT_PMD_INFO("Remove DSA vdev %s", name);
 
-	return 0;
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* free context and memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+
+		if (munmap(idxd->public.portal, 0x1000) < 0) {
+			IOAT_PMD_ERR("Error unmapping portal");
+			ret = -errno;
+		}
+
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+
+		rte_memzone_free(idxd->mz);
+	}
+
+	if (rte_rawdev_pmd_release(rdev))
+		IOAT_PMD_ERR("Device cleanup failed");
+
+	return ret;
 }
 
 struct rte_vdev_driver idxd_rawdev_drv_vdev = {
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 53f00a9f3c..6f7bdb4999 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -45,6 +45,10 @@ struct idxd_rawdev {
 	uint16_t max_batches;
 
 	union {
+		struct {
+			unsigned int dsa_id;
+		} vdev;
+
 		struct idxd_pci_common *pci;
 	} u;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 16/25] raw/ioat: add datapath data structures for idxd devices
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (14 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 17/25] raw/ioat: add configure function " Bruce Richardson
                     ` (10 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add the data structures used on the data path for DSA devices. Also
include a device dump function to output the status of each device.
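
As a rough illustration of how these structures fit together, the sketch
below shows how one rte_idxd_desc_batch presents a burst to the hardware:
the batch has its own completion record, and a single batch descriptor
whose desc_addr points at a contiguous run of descriptors (the leading
null descriptor followed by the queued ops). This mirrors the configure
and submit logic added later in this series; the helper name is
illustrative and assumes the definitions from this patch are in scope.

#include <rte_memory.h>	/* rte_mem_virt2iova() */

/* Sketch: set up the batch descriptor for one rte_idxd_desc_batch 'b'. */
static void
setup_batch_descriptor(struct rte_idxd_desc_batch *b)
{
	/* hardware writes the completion record for the whole batch here */
	b->batch_desc.completion = rte_mem_virt2iova(&b->comp);
	/* the batch starts at the null descriptor, followed by b->ops[] */
	b->batch_desc.desc_addr = rte_mem_virt2iova(&b->null_desc);
	b->batch_desc.op_flags = (idxd_op_batch << IDXD_CMD_OP_SHIFT) |
			IDXD_FLAG_COMPLETION_ADDR_VALID |
			IDXD_FLAG_REQUEST_COMPLETION;
	/*
	 * At submit time the driver sets b->batch_desc.size to
	 * b->op_count + 1 (null descriptor plus queued ops) and writes
	 * &b->batch_desc to the device portal.
	 */
}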

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 34 +++++++++++
 drivers/raw/ioat/ioat_private.h        |  2 +
 drivers/raw/ioat/ioat_rawdev_test.c    |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 80 ++++++++++++++++++++++++++
 6 files changed, 120 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index c3fec56d53..9bee927661 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -54,6 +54,7 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index e61c26c1b4..ba78eee907 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -33,6 +33,7 @@ struct idxd_vdev_args {
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index c3aa015ed3..672241351b 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -14,6 +14,36 @@ idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
 	return 0;
 }
 
+int
+idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	int i;
+
+	fprintf(f, "Raw Device #%d\n", dev->dev_id);
+	fprintf(f, "Driver: %s\n\n", dev->driver_name);
+
+	fprintf(f, "Portal: %p\n", rte_idxd->portal);
+	fprintf(f, "Batch Ring size: %u\n", rte_idxd->batch_ring_sz);
+	fprintf(f, "Comp Handle Ring size: %u\n\n", rte_idxd->hdl_ring_sz);
+
+	fprintf(f, "Next batch: %u\n", rte_idxd->next_batch);
+	fprintf(f, "Next batch to be completed: %u\n", rte_idxd->next_completed);
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		fprintf(f, "Batch %u @%p: submitted=%u, op_count=%u, hdl_end=%u\n",
+				i, b, b->submitted, b->op_count, b->hdl_end);
+	}
+
+	fprintf(f, "\n");
+	fprintf(f, "Next free hdl: %u\n", rte_idxd->next_free_hdl);
+	fprintf(f, "Last completed hdl: %u\n", rte_idxd->last_completed_hdl);
+	fprintf(f, "Next returned hdl: %u\n", rte_idxd->next_ret_hdl);
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
@@ -25,6 +55,10 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	int ret = 0;
 
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_hw_desc) != 64);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_idxd_hw_desc, size) != 32);
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_completion) != 32);
+
 	if (!name) {
 		IOAT_PMD_ERR("Invalid name of the device!");
 		ret = -EINVAL;
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 6f7bdb4999..f521c85a1a 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -61,4 +61,6 @@ extern int idxd_rawdev_close(struct rte_rawdev *dev);
 
 extern int idxd_rawdev_test(uint16_t dev_id);
 
+extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 7cd0f4abf5..a9132a8f10 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -261,7 +261,8 @@ ioat_rawdev_test(uint16_t dev_id)
 }
 
 int
-idxd_rawdev_test(uint16_t dev_id __rte_unused)
+idxd_rawdev_test(uint16_t dev_id)
 {
+	rte_rawdev_dump(dev_id, stdout);
 	return 0;
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index fa2eb5334c..178c432dd9 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -90,6 +90,86 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED			0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/*
+ * Defines used in the data path for interacting with hardware.
+ */
+#define IDXD_CMD_OP_SHIFT 24
+enum rte_idxd_ops {
+	idxd_op_nop = 0,
+	idxd_op_batch,
+	idxd_op_drain,
+	idxd_op_memmove,
+	idxd_op_fill
+};
+
+#define IDXD_FLAG_FENCE                 (1 << 0)
+#define IDXD_FLAG_COMPLETION_ADDR_VALID (1 << 2)
+#define IDXD_FLAG_REQUEST_COMPLETION    (1 << 3)
+#define IDXD_FLAG_CACHE_CONTROL         (1 << 8)
+
+/**
+ * Hardware descriptor used by DSA hardware, for both bursts and
+ * for individual operations.
+ */
+struct rte_idxd_hw_desc {
+	uint32_t pasid;
+	uint32_t op_flags;
+	rte_iova_t completion;
+
+	RTE_STD_C11
+	union {
+		rte_iova_t src;      /* source address for copy ops etc. */
+		rte_iova_t desc_addr; /* descriptor pointer for batch */
+	};
+	rte_iova_t dst;
+
+	uint32_t size;    /* length of data for op, or batch size */
+
+	/* 28 bytes of padding here */
+} __rte_aligned(64);
+
+/**
+ * Completion record structure written back by DSA
+ */
+struct rte_idxd_completion {
+	uint8_t status;
+	uint8_t result;
+	/* 16-bits pad here */
+	uint32_t completed_size; /* data length, or descriptors for batch */
+
+	rte_iova_t fault_address;
+	uint32_t invalid_flags;
+} __rte_aligned(32);
+
+#define BATCH_SIZE 64
+
+/**
+ * Structure used inside the driver for building up and submitting
+ * a batch of operations to the DSA hardware.
+ */
+struct rte_idxd_desc_batch {
+	struct rte_idxd_completion comp; /* the completion record for batch */
+
+	uint16_t submitted;
+	uint16_t op_count;
+	uint16_t hdl_end;
+
+	struct rte_idxd_hw_desc batch_desc;
+
+	/* batches must always have 2 descriptors, so put a null at the start */
+	struct rte_idxd_hw_desc null_desc;
+	struct rte_idxd_hw_desc ops[BATCH_SIZE];
+};
+
+/**
+ * structure used to save the "handles" provided by the user to be
+ * returned to the user on job completion.
+ */
+struct rte_idxd_user_hdl {
+	uint64_t src;
+	uint64_t dst;
+};
+
 /**
  * @internal
  * Structure representing an IDXD device instance
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 17/25] raw/ioat: add configure function for idxd devices
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (15 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 18/25] raw/ioat: add start and stop functions " Bruce Richardson
                     ` (9 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add configure function for idxd devices, taking the same parameters as the
existing configure function for ioat. The ring_size parameter is used to
compute the maximum number of bursts to be supported by the driver, given
that the hardware works on individual bursts of descriptors at a time.
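
As a worked example of that calculation (a condensed sketch of the sizing
done in idxd_dev_configure() below, with illustrative numbers and an
illustrative helper name): a requested ring_size of 4096 gives
4096 / 64 = 64 batches; on a device reporting max_batches = 32 this is
capped at 32 batches, i.e. 32 * 64 = 2048 descriptors, and any
non-power-of-two result is rounded up so the handle ring can be indexed
with a simple mask.

#include <stdint.h>
#include <rte_common.h>	/* rte_is_power_of_2(), rte_align32pow2() */

#define BATCH_SIZE 64	/* max ops per burst, as defined in this series */

/* Sketch: number of descriptors actually used for a requested ring size. */
static uint16_t
idxd_ring_descriptors(uint16_t ring_size, uint16_t hw_max_batches)
{
	uint16_t max_batches = ring_size / BATCH_SIZE;
	uint16_t max_desc = ring_size;

	if (max_batches > hw_max_batches) {
		max_batches = hw_max_batches;
		max_desc = max_batches * BATCH_SIZE;
	}
	if (!rte_is_power_of_2(max_desc))
		max_desc = rte_align32pow2(max_desc);
	return max_desc;
}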

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 64 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h        |  3 ++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  1 +
 5 files changed, 70 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 9bee927661..b173c5ae38 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -55,6 +55,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index ba78eee907..3dad1473b4 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -34,6 +34,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 672241351b..5173c331c9 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -44,6 +44,70 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	struct rte_ioat_rawdev_config *cfg = config;
+	uint16_t max_desc = cfg->ring_size;
+	uint16_t max_batches = max_desc / BATCH_SIZE;
+	uint16_t i;
+
+	if (config_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (dev->started) {
+		IOAT_PMD_ERR("%s: Error, device is started.", __func__);
+		return -EAGAIN;
+	}
+
+	rte_idxd->hdls_disable = cfg->hdls_disable;
+
+	/* limit the batches to what can be stored in hardware */
+	if (max_batches > idxd->max_batches) {
+		IOAT_PMD_DEBUG("Ring size of %u is too large for this device, need to limit to %u batches of %u",
+				max_desc, idxd->max_batches, BATCH_SIZE);
+		max_batches = idxd->max_batches;
+		max_desc = max_batches * BATCH_SIZE;
+	}
+	if (!rte_is_power_of_2(max_desc))
+		max_desc = rte_align32pow2(max_desc);
+	IOAT_PMD_DEBUG("Rawdev %u using %u descriptors in %u batches",
+			dev->dev_id, max_desc, max_batches);
+
+	/* in case we are reconfiguring a device, free any existing memory */
+	rte_free(rte_idxd->batch_ring);
+	rte_free(rte_idxd->hdl_ring);
+
+	rte_idxd->batch_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->batch_ring) * max_batches, 0);
+	if (rte_idxd->batch_ring == NULL)
+		return -ENOMEM;
+
+	rte_idxd->hdl_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->hdl_ring) * max_desc, 0);
+	if (rte_idxd->hdl_ring == NULL) {
+		rte_free(rte_idxd->batch_ring);
+		rte_idxd->batch_ring = NULL;
+		return -ENOMEM;
+	}
+	rte_idxd->batch_ring_sz = max_batches;
+	rte_idxd->hdl_ring_sz = max_desc;
+
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		b->batch_desc.completion = rte_mem_virt2iova(&b->comp);
+		b->batch_desc.desc_addr = rte_mem_virt2iova(&b->null_desc);
+		b->batch_desc.op_flags = (idxd_op_batch << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_COMPLETION_ADDR_VALID |
+				IDXD_FLAG_REQUEST_COMPLETION;
+	}
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index f521c85a1a..aba70d8d71 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -59,6 +59,9 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 extern int idxd_rawdev_close(struct rte_rawdev *dev);
 
+extern int idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 178c432dd9..e9cdce0162 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -187,6 +187,7 @@ struct rte_idxd_rawdev {
 	uint16_t next_ret_hdl;   /* the next user hdl to return */
 	uint16_t last_completed_hdl; /* the last user hdl that has completed */
 	uint16_t next_free_hdl;  /* where the handle for next op will go */
+	uint16_t hdls_disable;   /* disable tracking completion handles */
 
 	struct rte_idxd_user_hdl *hdl_ring;
 	struct rte_idxd_desc_batch *batch_ring;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 18/25] raw/ioat: add start and stop functions for idxd devices
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (16 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 17/25] raw/ioat: add configure function " Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 19/25] raw/ioat: add data path " Bruce Richardson
                     ` (8 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add the start and stop functions for DSA hardware devices using the
vfio/uio kernel drivers. For vdevs using the idxd kernel driver, the device
must be started using sysfs before the device node appears for vdev use -
making start/stop functions in the driver unnecessary.
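
Both functions rely on checking whether the WQ is already enabled; a
minimal sketch of that test, assuming the 8 x 32-bit WQCFG register block
for the queue has already been read into 'wqcfg' (the helper name is
illustrative, the defines come from ioat_spec.h in this series, and a
state value of 0x1 means enabled, as in idxd_is_wq_enabled()):

/* Sketch: is this work queue currently enabled? */
static int
wq_is_enabled(const uint32_t *wqcfg)
{
	uint32_t state = (wqcfg[WQ_STATE_IDX] >> WQ_STATE_SHIFT) & WQ_STATE_MASK;

	return state == 0x1;
}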

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c | 50 +++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index b173c5ae38..6b5c47392c 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -51,11 +51,61 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 	return ((state >> WQ_STATE_SHIFT) & WQ_STATE_MASK) == 0x1;
 }
 
+static void
+idxd_pci_dev_stop(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (!idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Work queue %d already disabled", idxd->qid);
+		return;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_wq);
+	if (err_code || idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed disabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return;
+	}
+	IOAT_PMD_DEBUG("Work queue %d disabled OK", idxd->qid);
+}
+
+static int
+idxd_pci_dev_start(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_WARN("WQ %d already enabled", idxd->qid);
+		return 0;
+	}
+
+	if (idxd->public.batch_ring == NULL) {
+		IOAT_PMD_ERR("WQ %d has not been fully configured", idxd->qid);
+		return -EINVAL;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_wq);
+	if (err_code || !idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed enabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return err_code == 0 ? -1 : err_code;
+	}
+
+	IOAT_PMD_DEBUG("Work queue %d enabled OK", idxd->qid);
+
+	return 0;
+}
+
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_start = idxd_pci_dev_start,
+		.dev_stop = idxd_pci_dev_stop,
 };
 
 /* each portal uses 4 x 4k pages */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 19/25] raw/ioat: add data path for idxd devices
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (17 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 18/25] raw/ioat: add start and stop functions " Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 20/25] raw/ioat: add info function " Bruce Richardson
                     ` (7 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add support for doing copies using DSA hardware. This is implemented by
switching on the device type field at the start of the inline functions.
Since no system will have both device types present, this branch is always
predictable after the first call, meaning it has little to no performance
penalty.
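
A minimal sketch of the dispatch pattern this relies on: both
driver-private structures begin with the same enum rte_ioat_dev_type
member, so the inline wrappers can inspect dev_private without knowing
which full structure lies behind it. The struct and helper names below
are illustrative only.

#include <rte_ioat_rawdev.h>	/* enum rte_ioat_dev_type via the fns header */

/* Both private structs share the same leading member... */
struct ioat_like { enum rte_ioat_dev_type type; /* ...IOAT fields... */ };
struct idxd_like { enum rte_ioat_dev_type type; /* ...DSA fields...  */ };

/* ...so a wrapper can branch on the device type with a plain cast. */
static inline int
is_dsa_device(void *dev_private)
{
	return *(enum rte_ioat_dev_type *)dev_private == RTE_IDXD_DEV;
}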

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/ioat_common.c         |   1 +
 drivers/raw/ioat/ioat_rawdev.c         |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 201 +++++++++++++++++++++++--
 3 files changed, 192 insertions(+), 11 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 5173c331c9..6a4e2979f5 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -153,6 +153,7 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 	idxd = rawdev->dev_private;
 	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->public.type = RTE_IDXD_DEV;
 	idxd->rawdev = rawdev;
 	idxd->mz = mz;
 
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 1fe32278d2..0097be87ee 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -260,6 +260,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	rawdev->driver_name = dev->device.driver->name;
 
 	ioat = rawdev->dev_private;
+	ioat->type = RTE_IOAT_DEV;
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index e9cdce0162..36ba876eab 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -196,8 +196,8 @@ struct rte_idxd_rawdev {
 /*
  * Enqueue a copy operation onto the ioat device
  */
-static inline int
-rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+static __rte_always_inline int
+__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -233,8 +233,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 }
 
 /* add fence to last written descriptor */
-static inline int
-rte_ioat_fence(int dev_id)
+static __rte_always_inline int
+__ioat_fence(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -252,8 +252,8 @@ rte_ioat_fence(int dev_id)
 /*
  * Trigger hardware to begin performing enqueued operations
  */
-static inline void
-rte_ioat_perform_ops(int dev_id)
+static __rte_always_inline void
+__ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -268,8 +268,8 @@ rte_ioat_perform_ops(int dev_id)
  * @internal
  * Returns the index of the last completed operation.
  */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+static __rte_always_inline int
+__ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 {
 	uint64_t status = ioat->status;
 
@@ -283,8 +283,8 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /*
  * Returns details of operations that have been completed
  */
-static inline int
-rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
+static __rte_always_inline int
+__ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -295,7 +295,7 @@ rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 	int error;
 	int i = 0;
 
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	end_read = (__ioat_get_last_completed(ioat, &error) + 1) & mask;
 	count = (end_read - (read & mask)) & mask;
 
 	if (error) {
@@ -332,6 +332,185 @@ rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static __rte_always_inline int
+__idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
+		const struct rte_idxd_user_hdl *hdl)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+
+	/* check for room in the handle ring */
+	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl)
+		goto failed;
+
+	/* check for space in current batch */
+	if (b->op_count >= BATCH_SIZE)
+		goto failed;
+
+	/* check that we can actually use the current batch */
+	if (b->submitted)
+		goto failed;
+
+	/* write the descriptor */
+	b->ops[b->op_count++] = *desc;
+
+	/* store the completion details */
+	if (!idxd->hdls_disable)
+		idxd->hdl_ring[idxd->next_free_hdl] = *hdl;
+	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
+		idxd->next_free_hdl = 0;
+
+	return 1;
+
+failed:
+	rte_errno = ENOSPC;
+	return 0;
+}
+
+static __rte_always_inline int
+__idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	const struct rte_idxd_hw_desc desc = {
+			.op_flags =  (idxd_op_memmove << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_CACHE_CONTROL,
+			.src = src,
+			.dst = dst,
+			.size = length
+	};
+	const struct rte_idxd_user_hdl hdl = {
+			.src = src_hdl,
+			.dst = dst_hdl
+	};
+	return __idxd_write_desc(dev_id, &desc, &hdl);
+}
+
+static __rte_always_inline int
+__idxd_fence(int dev_id)
+{
+	static const struct rte_idxd_hw_desc fence = {
+			.op_flags = IDXD_FLAG_FENCE
+	};
+	static const struct rte_idxd_user_hdl null_hdl;
+	return __idxd_write_desc(dev_id, &fence, &null_hdl);
+}
+
+static __rte_always_inline void
+__idxd_movdir64b(volatile void *dst, const void *src)
+{
+	asm volatile (".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
+			:
+			: "a" (dst), "d" (src));
+}
+
+static __rte_always_inline void
+__idxd_perform_ops(int dev_id)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+
+	if (b->submitted || b->op_count == 0)
+		return;
+	b->hdl_end = idxd->next_free_hdl;
+	b->comp.status = 0;
+	b->submitted = 1;
+	b->batch_desc.size = b->op_count + 1;
+	__idxd_movdir64b(idxd->portal, &b->batch_desc);
+
+	if (++idxd->next_batch == idxd->batch_ring_sz)
+		idxd->next_batch = 0;
+}
+
+static __rte_always_inline int
+__idxd_completed_ops(int dev_id, uint8_t max_ops,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_completed];
+	uint16_t h_idx = idxd->next_ret_hdl;
+	int n = 0;
+
+	while (b->submitted && b->comp.status != 0) {
+		idxd->last_completed_hdl = b->hdl_end;
+		b->submitted = 0;
+		b->op_count = 0;
+		if (++idxd->next_completed == idxd->batch_ring_sz)
+			idxd->next_completed = 0;
+		b = &idxd->batch_ring[idxd->next_completed];
+	}
+
+	if (!idxd->hdls_disable)
+		for (n = 0; n < max_ops && h_idx != idxd->last_completed_hdl; n++) {
+			src_hdls[n] = idxd->hdl_ring[h_idx].src;
+			dst_hdls[n] = idxd->hdl_ring[h_idx].dst;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+	else
+		while (h_idx != idxd->last_completed_hdl) {
+			n++;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+
+	idxd->next_ret_hdl = h_idx;
+
+	return n;
+}
+
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl);
+	else
+		return __ioat_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl);
+}
+
+static inline int
+rte_ioat_fence(int dev_id)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_fence(dev_id);
+	else
+		return __ioat_fence(dev_id);
+}
+
+static inline void
+rte_ioat_perform_ops(int dev_id)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_perform_ops(dev_id);
+	else
+		return __ioat_perform_ops(dev_id);
+}
+
+static inline int
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_completed_ops(dev_id, max_copies,
+				src_hdls, dst_hdls);
+	else
+		return __ioat_completed_ops(dev_id,  max_copies,
+				src_hdls, dst_hdls);
+}
+
 static inline void
 __rte_deprecated_msg("use rte_ioat_perform_ops() instead")
 rte_ioat_do_copies(int dev_id) { rte_ioat_perform_ops(dev_id); }
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 20/25] raw/ioat: add info function for idxd devices
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (18 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 19/25] raw/ioat: add data path " Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 21/25] raw/ioat: create separate statistics structure Bruce Richardson
                     ` (6 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add the info get function for DSA devices, returning just the ring size
information about the device, the same as is returned for existing
IOAT/CBDMA devices.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c     |  1 +
 drivers/raw/ioat/idxd_vdev.c    |  1 +
 drivers/raw/ioat/ioat_common.c  | 18 ++++++++++++++++++
 drivers/raw/ioat/ioat_private.h |  3 +++
 4 files changed, 23 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 6b5c47392c..bf5edcfddd 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -106,6 +106,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 3dad1473b4..c75ac43175 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -35,6 +35,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 6a4e2979f5..b5cea2fda0 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -44,6 +44,24 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size)
+{
+	struct rte_ioat_rawdev_config *cfg = dev_info;
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+
+	if (info_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (cfg != NULL) {
+		cfg->ring_size = rte_idxd->hdl_ring_sz;
+		cfg->hdls_disable = rte_idxd->hdls_disable;
+	}
+	return 0;
+}
+
 int
 idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size)
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index aba70d8d71..0f80d60bf6 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -62,6 +62,9 @@ extern int idxd_rawdev_close(struct rte_rawdev *dev);
 extern int idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size);
 
+extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 21/25] raw/ioat: create separate statistics structure
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (19 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 20/25] raw/ioat: add info function " Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
                     ` (5 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Rather than having the xstats as fields inside the main driver structure,
create a separate structure type for them.

As part of the change, the stats functions that previously referred to
each counter by name are updated to use the xstat id to index directly
into the new stats structure, making the code shorter and simpler.
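
The simplification works because the new structure is nothing but
consecutive uint64_t counters, so an xstat id can be used directly as an
index once the structure is viewed as a flat array. A minimal sketch of
that pattern (the helper name is illustrative):

/* Sketch: fetch one counter by xstat id; ids 0..3 map to enqueue_failed,
 * enqueued, started and completed respectively.
 */
static uint64_t
xstat_by_id(const struct rte_ioat_xstats *xs, unsigned int id)
{
	const uint64_t *stats = (const uint64_t *)xs;
	const unsigned int nb_stats = sizeof(*xs) / sizeof(*stats);

	return (id < nb_stats) ? stats[id] : 0;
}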

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c         | 40 +++++++-------------------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 30 ++++++++++++-------
 2 files changed, 29 insertions(+), 41 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 0097be87ee..4ea913fff1 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -132,16 +132,14 @@ ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
 		uint64_t values[], unsigned int n)
 {
 	const struct rte_ioat_rawdev *ioat = dev->dev_private;
+	const uint64_t *stats = (const void *)&ioat->xstats;
 	unsigned int i;
 
 	for (i = 0; i < n; i++) {
-		switch (ids[i]) {
-		case 0: values[i] = ioat->enqueue_failed; break;
-		case 1: values[i] = ioat->enqueued; break;
-		case 2: values[i] = ioat->started; break;
-		case 3: values[i] = ioat->completed; break;
-		default: values[i] = 0; break;
-		}
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			values[i] = stats[ids[i]];
+		else
+			values[i] = 0;
 	}
 	return n;
 }
@@ -167,35 +165,17 @@ static int
 ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
 {
 	struct rte_ioat_rawdev *ioat = dev->dev_private;
+	uint64_t *stats = (void *)&ioat->xstats;
 	unsigned int i;
 
 	if (!ids) {
-		ioat->enqueue_failed = 0;
-		ioat->enqueued = 0;
-		ioat->started = 0;
-		ioat->completed = 0;
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
 		return 0;
 	}
 
-	for (i = 0; i < nb_ids; i++) {
-		switch (ids[i]) {
-		case 0:
-			ioat->enqueue_failed = 0;
-			break;
-		case 1:
-			ioat->enqueued = 0;
-			break;
-		case 2:
-			ioat->started = 0;
-			break;
-		case 3:
-			ioat->completed = 0;
-			break;
-		default:
-			IOAT_PMD_WARN("Invalid xstat id - cannot reset value");
-			break;
-		}
-	}
+	for (i = 0; i < nb_ids; i++)
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			stats[ids[i]] = 0;
 
 	return 0;
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 36ba876eab..89bfc8d21a 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -49,17 +49,31 @@ enum rte_ioat_dev_type {
 	RTE_IDXD_DEV,
 };
 
+/**
+ * @internal
+ * some statistics for tracking, if added/changed update xstats fns
+ */
+struct rte_ioat_xstats {
+	uint64_t enqueue_failed;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+};
+
 /**
  * @internal
  * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	/* common fields at the top - match those in rte_idxd_rawdev */
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile uint16_t *doorbell;
+	volatile uint16_t *doorbell __rte_cache_aligned;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -72,12 +86,6 @@ struct rte_ioat_rawdev {
 	unsigned short next_read;
 	unsigned short next_write;
 
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
 
@@ -209,7 +217,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	struct rte_ioat_generic_hw_desc *desc;
 
 	if (space == 0) {
-		ioat->enqueue_failed++;
+		ioat->xstats.enqueue_failed++;
 		return 0;
 	}
 
@@ -228,7 +236,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 					(int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
-	ioat->enqueued++;
+	ioat->xstats.enqueued++;
 	return 1;
 }
 
@@ -261,7 +269,7 @@ __ioat_perform_ops(int dev_id)
 			.control.completion_update = 1;
 	rte_compiler_barrier();
 	*ioat->doorbell = ioat->next_write;
-	ioat->started = ioat->enqueued;
+	ioat->xstats.started = ioat->xstats.enqueued;
 }
 
 /**
@@ -328,7 +336,7 @@ __ioat_completed_ops(int dev_id, uint8_t max_copies,
 
 end:
 	ioat->next_read = read;
-	ioat->completed += count;
+	ioat->xstats.completed += count;
 	return count;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 22/25] raw/ioat: move xstats functions to common file
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (20 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 21/25] raw/ioat: create separate statistics structure Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
                     ` (4 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

The xstats functions can be used by all ioat devices so move them from the
ioat_rawdev.c file to ioat_common.c, and add the function prototypes to the
internal header file.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/ioat_common.c  | 59 +++++++++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 10 ++++++
 drivers/raw/ioat/ioat_rawdev.c  | 58 --------------------------------
 3 files changed, 69 insertions(+), 58 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index b5cea2fda0..142e171bc9 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -5,9 +5,68 @@
 #include <rte_rawdev_pmd.h>
 #include <rte_memzone.h>
 #include <rte_common.h>
+#include <rte_string_fns.h>
 
 #include "ioat_private.h"
 
+static const char * const xstat_names[] = {
+		"failed_enqueues", "successful_enqueues",
+		"copies_started", "copies_completed"
+};
+
+int
+ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n)
+{
+	const struct rte_ioat_rawdev *ioat = dev->dev_private;
+	const uint64_t *stats = (const void *)&ioat->xstats;
+	unsigned int i;
+
+	for (i = 0; i < n; i++) {
+		if (ids[i] >= sizeof(ioat->xstats)/sizeof(*stats))
+			values[i] = 0;
+		else
+			values[i] = stats[ids[i]];
+	}
+	return n;
+}
+
+int
+ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size)
+{
+	unsigned int i;
+
+	RTE_SET_USED(dev);
+	if (size < RTE_DIM(xstat_names))
+		return RTE_DIM(xstat_names);
+
+	for (i = 0; i < RTE_DIM(xstat_names); i++)
+		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
+
+	return RTE_DIM(xstat_names);
+}
+
+int
+ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
+{
+	struct rte_ioat_rawdev *ioat = dev->dev_private;
+	uint64_t *stats = (void *)&ioat->xstats;
+	unsigned int i;
+
+	if (!ids) {
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
+		return 0;
+	}
+
+	for (i = 0; i < nb_ids; i++)
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			stats[ids[i]] = 0;
+
+	return 0;
+}
+
 int
 idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
 {
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 0f80d60bf6..ab9a3e6cce 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -53,6 +53,16 @@ struct idxd_rawdev {
 	} u;
 };
 
+int ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n);
+
+int ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size);
+
+int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
+		uint32_t nb_ids);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 4ea913fff1..dd2543c809 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -122,64 +122,6 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 	return 0;
 }
 
-static const char * const xstat_names[] = {
-		"failed_enqueues", "successful_enqueues",
-		"copies_started", "copies_completed"
-};
-
-static int
-ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
-		uint64_t values[], unsigned int n)
-{
-	const struct rte_ioat_rawdev *ioat = dev->dev_private;
-	const uint64_t *stats = (const void *)&ioat->xstats;
-	unsigned int i;
-
-	for (i = 0; i < n; i++) {
-		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
-			values[i] = stats[ids[i]];
-		else
-			values[i] = 0;
-	}
-	return n;
-}
-
-static int
-ioat_xstats_get_names(const struct rte_rawdev *dev,
-		struct rte_rawdev_xstats_name *names,
-		unsigned int size)
-{
-	unsigned int i;
-
-	RTE_SET_USED(dev);
-	if (size < RTE_DIM(xstat_names))
-		return RTE_DIM(xstat_names);
-
-	for (i = 0; i < RTE_DIM(xstat_names); i++)
-		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
-
-	return RTE_DIM(xstat_names);
-}
-
-static int
-ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
-{
-	struct rte_ioat_rawdev *ioat = dev->dev_private;
-	uint64_t *stats = (void *)&ioat->xstats;
-	unsigned int i;
-
-	if (!ids) {
-		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
-		return 0;
-	}
-
-	for (i = 0; i < nb_ids; i++)
-		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
-			stats[ids[i]] = 0;
-
-	return 0;
-}
-
 static int
 ioat_dev_close(struct rte_rawdev *dev __rte_unused)
 {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 23/25] raw/ioat: add xstats tracking for idxd devices
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (21 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 24/25] raw/ioat: clean up use of common test function Bruce Richardson
                     ` (3 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Add update of the relevant stats for the data path functions and point the
overall device struct xstats function pointers to the existing ioat
functions.

At this point, all necessary hooks for supporting the existing unit tests
are in place so call them for each device.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            | 3 +++
 drivers/raw/ioat/idxd_vdev.c           | 3 +++
 drivers/raw/ioat/ioat_rawdev_test.c    | 2 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 6 ++++++
 4 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index bf5edcfddd..9113f8c8e9 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -107,6 +107,9 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index c75ac43175..38218cc1e1 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -36,6 +36,9 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index a9132a8f10..0b172f3183 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -264,5 +264,5 @@ int
 idxd_rawdev_test(uint16_t dev_id)
 {
 	rte_rawdev_dump(dev_id, stdout);
-	return 0;
+	return ioat_rawdev_test(dev_id);
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 89bfc8d21a..d0045d8a49 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -184,6 +184,8 @@ struct rte_idxd_user_hdl {
  */
 struct rte_idxd_rawdev {
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	void *portal; /* address to write the batch descriptor */
 
 	/* counters to track the batches and the individual op handles */
@@ -369,9 +371,11 @@ __idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
 	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
 		idxd->next_free_hdl = 0;
 
+	idxd->xstats.enqueued++;
 	return 1;
 
 failed:
+	idxd->xstats.enqueue_failed++;
 	rte_errno = ENOSPC;
 	return 0;
 }
@@ -429,6 +433,7 @@ __idxd_perform_ops(int dev_id)
 
 	if (++idxd->next_batch == idxd->batch_ring_sz)
 		idxd->next_batch = 0;
+	idxd->xstats.started = idxd->xstats.enqueued;
 }
 
 static __rte_always_inline int
@@ -466,6 +471,7 @@ __idxd_completed_ops(int dev_id, uint8_t max_ops,
 
 	idxd->next_ret_hdl = h_idx;
 
+	idxd->xstats.completed += n;
 	return n;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 24/25] raw/ioat: clean up use of common test function
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (22 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 25/25] raw/ioat: add fill operation Bruce Richardson
                     ` (2 subsequent siblings)
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Bruce Richardson, Kevin Laatz

Now that all devices can pass the same set of unit tests, eliminate the
temporary idxd_rawdev_test function and move the prototype for
ioat_rawdev_test to the proper internal header file, to be used by all
device instances.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c         | 2 +-
 drivers/raw/ioat/idxd_vdev.c        | 2 +-
 drivers/raw/ioat/ioat_private.h     | 4 ++--
 drivers/raw/ioat/ioat_rawdev.c      | 2 --
 drivers/raw/ioat/ioat_rawdev_test.c | 7 -------
 5 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 9113f8c8e9..165a9ea7f1 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -101,7 +101,7 @@ idxd_pci_dev_start(struct rte_rawdev *dev)
 
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 38218cc1e1..50d47d05c5 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -32,7 +32,7 @@ struct idxd_vdev_args {
 
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_info_get = idxd_dev_info_get,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index ab9a3e6cce..a74bc0422f 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -63,6 +63,8 @@ int ioat_xstats_get_names(const struct rte_rawdev *dev,
 int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
 		uint32_t nb_ids);
 
+extern int ioat_rawdev_test(uint16_t dev_id);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
@@ -75,8 +77,6 @@ extern int idxd_dev_configure(const struct rte_rawdev *dev,
 extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 		size_t info_size);
 
-extern int idxd_rawdev_test(uint16_t dev_id);
-
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
 
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index dd2543c809..2c88b4369f 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -128,8 +128,6 @@ ioat_dev_close(struct rte_rawdev *dev __rte_unused)
 	return 0;
 }
 
-extern int ioat_rawdev_test(uint16_t dev_id);
-
 static int
 ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 0b172f3183..7be6f2a2d1 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -259,10 +259,3 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
-
-int
-idxd_rawdev_test(uint16_t dev_id)
-{
-	rte_rawdev_dump(dev_id, stdout);
-	return ioat_rawdev_test(dev_id);
-}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v4 25/25] raw/ioat: add fill operation
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (23 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 24/25] raw/ioat: clean up use of common test function Bruce Richardson
@ 2020-09-28 16:42   ` Bruce Richardson
  2020-10-02 14:07   ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Nicolau, Radu
  2020-10-06 21:10   ` Thomas Monjalon
  26 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:42 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, Kevin Laatz, Bruce Richardson

From: Kevin Laatz <kevin.laatz@intel.com>

Add fill operation enqueue support for IOAT and IDXD. The fill enqueue is
similar to the copy enqueue, but takes a 'pattern' to be written to the
destination address rather than a source address to copy from. This patch
also includes an additional test case for the new operation type.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/rawdevs/ioat.rst            | 10 ++++
 doc/guides/rel_notes/release_20_11.rst |  2 +
 drivers/raw/ioat/ioat_rawdev_test.c    | 62 ++++++++++++++++++++++++
 drivers/raw/ioat/rte_ioat_rawdev.h     | 26 +++++++++++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 65 ++++++++++++++++++++++++--
 5 files changed, 160 insertions(+), 5 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 7c2a2d4570..250cfc48a6 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -285,6 +285,16 @@ is correct before freeing the data buffers using the returned handles:
         }
 
 
+Filling an Area of Memory
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The IOAT driver also has support for the ``fill`` operation, where an area
+of memory is overwritten, or filled, with a short pattern of data.
+Fill operations can be performed in much the same way as copy operations
+described above, just using the ``rte_ioat_enqueue_fill()`` function rather
+than the ``rte_ioat_enqueue_copy()`` function.
+
+
 Querying Device Statistics
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4d8b781547..dd65b779dd 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -84,6 +84,8 @@ New Features
 
   * Added support for Intel\ |reg| Data Streaming Accelerator hardware.
     For more information, see https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
+  * Added support for the fill operation via the API ``rte_ioat_enqueue_fill()``,
+    where the hardware fills an area of memory with a repeating pattern.
   * Added a per-device configuration flag to disable management of user-provided completion handles
   * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
     and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 7be6f2a2d1..2c48ca679f 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -152,6 +152,52 @@ test_enqueue_copies(int dev_id)
 	return 0;
 }
 
+static int
+test_enqueue_fill(int dev_id)
+{
+	const unsigned int length[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst = rte_pktmbuf_alloc(pool);
+	char *dst_data = rte_pktmbuf_mtod(dst, char *);
+	struct rte_mbuf *completed[2] = {0};
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	for (i = 0; i < RTE_DIM(length); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, length[i]);
+
+		/* perform the fill operation */
+		if (rte_ioat_enqueue_fill(dev_id, pattern,
+				dst->buf_iova + dst->data_off, length[i],
+				(uintptr_t)dst) != 1) {
+			PRINT_ERR("Error with rte_ioat_enqueue_fill\n");
+			return -1;
+		}
+
+		rte_ioat_perform_ops(dev_id);
+		usleep(100);
+
+		if (rte_ioat_completed_ops(dev_id, 1, (void *)&completed[0],
+			(void *)&completed[1]) != 1) {
+			PRINT_ERR("Error with completed ops\n");
+			return -1;
+		}
+		/* check the result */
+		for (j = 0; j < length[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte) {
+				PRINT_ERR("Error with fill operation (length = %u): got (%x), not (%x)\n",
+						length[i], dst_data[j],
+						pat_byte);
+				return -1;
+			}
+		}
+	}
+
+	rte_pktmbuf_free(dst);
+	return 0;
+}
+
 int
 ioat_rawdev_test(uint16_t dev_id)
 {
@@ -225,6 +271,7 @@ ioat_rawdev_test(uint16_t dev_id)
 	}
 
 	/* run the test cases */
+	printf("Running Copy Tests\n");
 	for (i = 0; i < 100; i++) {
 		unsigned int j;
 
@@ -238,6 +285,21 @@ ioat_rawdev_test(uint16_t dev_id)
 	}
 	printf("\n");
 
+	/* test enqueue fill operation */
+	printf("Running Fill Tests\n");
+	for (i = 0; i < 100; i++) {
+		unsigned int j;
+
+		if (test_enqueue_fill(dev_id) != 0)
+			goto err;
+
+		rte_rawdev_xstats_get(dev_id, ids, stats, nb_xstats);
+		for (j = 0; j < nb_xstats; j++)
+			printf("%s: %"PRIu64"   ", snames[j].name, stats[j]);
+		printf("\r");
+	}
+	printf("\n");
+
 	rte_rawdev_stop(dev_id);
 	if (rte_rawdev_xstats_reset(dev_id, NULL, 0) != 0) {
 		PRINT_ERR("Error resetting xstat values\n");
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 6b891cd449..b7632ebf3b 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -37,6 +37,32 @@ struct rte_ioat_rawdev_config {
 	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
+/**
+ * Enqueue a fill operation onto the ioat device
+ *
+ * This queues up a fill operation to be performed by hardware, but does not
+ * trigger hardware to begin that operation.
+ *
+ * @param dev_id
+ *   The rawdev device id of the ioat instance
+ * @param pattern
+ *   The pattern to populate the destination buffer with
+ * @param dst
+ *   The physical address of the destination buffer
+ * @param length
+ *   The length of the destination buffer
+ * @param dst_hdl
+ *   An opaque handle for the destination data, to be returned when this
+ *   operation has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
+ * @return
+ *   Number of operations enqueued, either 0 or 1
+ */
+static inline int
+rte_ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int length, uintptr_t dst_hdl);
+
 /**
  * Enqueue a copy operation onto the ioat device
  *
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index d0045d8a49..c2c4601ca7 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -115,6 +115,13 @@ enum rte_idxd_ops {
 #define IDXD_FLAG_REQUEST_COMPLETION    (1 << 3)
 #define IDXD_FLAG_CACHE_CONTROL         (1 << 8)
 
+#define IOAT_COMP_UPDATE_SHIFT	3
+#define IOAT_CMD_OP_SHIFT	24
+enum rte_ioat_ops {
+	ioat_op_copy = 0,	/* Standard DMA Operation */
+	ioat_op_fill		/* Block Fill */
+};
+
 /**
  * Hardware descriptor used by DSA hardware, for both bursts and
  * for individual operations.
@@ -203,11 +210,8 @@ struct rte_idxd_rawdev {
 	struct rte_idxd_desc_batch *batch_ring;
 };
 
-/*
- * Enqueue a copy operation onto the ioat device
- */
 static __rte_always_inline int
-__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+__ioat_write_desc(int dev_id, uint32_t op, uint64_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -229,7 +233,8 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc = &ioat->desc_ring[write];
 	desc->size = length;
 	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!(write & 0xF)) << 3);
+	desc->u.control_raw = (uint32_t)((op << IOAT_CMD_OP_SHIFT) |
+			(!(write & 0xF) << IOAT_COMP_UPDATE_SHIFT));
 	desc->src_addr = src;
 	desc->dest_addr = dst;
 
@@ -242,6 +247,27 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	return 1;
 }
 
+static __rte_always_inline int
+__ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int length, uintptr_t dst_hdl)
+{
+	static const uintptr_t null_hdl;
+
+	return __ioat_write_desc(dev_id, ioat_op_fill, pattern, dst, length,
+			null_hdl, dst_hdl);
+}
+
+/*
+ * Enqueue a copy operation onto the ioat device
+ */
+static __rte_always_inline int
+__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	return __ioat_write_desc(dev_id, ioat_op_copy, src, dst, length,
+			src_hdl, dst_hdl);
+}
+
 /* add fence to last written descriptor */
 static __rte_always_inline int
 __ioat_fence(int dev_id)
@@ -380,6 +406,23 @@ __idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
 	return 0;
 }
 
+static __rte_always_inline int
+__idxd_enqueue_fill(int dev_id, uint64_t pattern, rte_iova_t dst,
+		unsigned int length, uintptr_t dst_hdl)
+{
+	const struct rte_idxd_hw_desc desc = {
+			.op_flags =  (idxd_op_fill << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_CACHE_CONTROL,
+			.src = pattern,
+			.dst = dst,
+			.size = length
+	};
+	const struct rte_idxd_user_hdl hdl = {
+			.dst = dst_hdl
+	};
+	return __idxd_write_desc(dev_id, &desc, &hdl);
+}
+
 static __rte_always_inline int
 __idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
@@ -475,6 +518,18 @@ __idxd_completed_ops(int dev_id, uint8_t max_ops,
 	return n;
 }
 
+static inline int
+rte_ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int len, uintptr_t dst_hdl)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_enqueue_fill(dev_id, pattern, dst, len, dst_hdl);
+	else
+		return __ioat_enqueue_fill(dev_id, pattern, dst, len, dst_hdl);
+}
+
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (24 preceding siblings ...)
  2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 25/25] raw/ioat: add fill operation Bruce Richardson
@ 2020-10-02 14:07   ` Nicolau, Radu
  2020-10-06 21:10   ` Thomas Monjalon
  26 siblings, 0 replies; 157+ messages in thread
From: Nicolau, Radu @ 2020-10-02 14:07 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: patrick.fu


On 9/28/2020 5:42 PM, Bruce Richardson wrote:
> This patchset adds some small enhancements, some rework and also support
> for new hardware to the ioat rawdev driver. Most rework and enhancements
> are largely self-explanatory from the individual patches.
>
> The new hardware support is for the Intel(R) DSA accelerator which will be
> present in future Intel processors. A description of this new hardware is
> covered in [1]. Functions specific to the new hardware use the "idxd"
> prefix, for consistency with the kernel driver.
>
> [1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
>
> ---
> V4:
>   * Fixed compile with FreeBSD clang
>   * Improved autotests for fill operation
>
> V3:
>   * More doc updates including release note updates throughout the set
>   * Added in fill operation
>   * Added in fix for missing close operation
>   * Added in fix for doc building to ensure ioat is in in the index
>
> V2:
>   * Included documentation additions in the set
>   * Split off the rawdev unit test changes to a separate patchset for easier
>     review
>   * General code improvements and cleanups
>
> Bruce Richardson (19):
>    doc/api: add ioat driver to index
>    raw/ioat: enable use from C++ code
>    raw/ioat: include extra info in error messages
>    raw/ioat: split header for readability
>    raw/ioat: rename functions to be operation-agnostic
>    raw/ioat: add separate API for fence call
>    raw/ioat: make the HW register spec private
>    raw/ioat: add skeleton for VFIO/UIO based DSA device
>    raw/ioat: include example configuration script
>    raw/ioat: create rawdev instances on idxd PCI probe
>    raw/ioat: add datapath data structures for idxd devices
>    raw/ioat: add configure function for idxd devices
>    raw/ioat: add start and stop functions for idxd devices
>    raw/ioat: add data path for idxd devices
>    raw/ioat: add info function for idxd devices
>    raw/ioat: create separate statistics structure
>    raw/ioat: move xstats functions to common file
>    raw/ioat: add xstats tracking for idxd devices
>    raw/ioat: clean up use of common test function
>
> Cheng Jiang (1):
>    raw/ioat: add a flag to control copying handle parameters
>
> Kevin Laatz (5):
>    raw/ioat: fix missing close function
>    usertools/dpdk-devbind.py: add support for DSA HW
>    raw/ioat: add vdev probe for DSA/idxd devices
>    raw/ioat: create rawdev instances for idxd vdevs
>    raw/ioat: add fill operation
>
>   doc/api/doxy-api-index.md                     |   1 +
>   doc/api/doxy-api.conf.in                      |   1 +
>   doc/guides/rawdevs/ioat.rst                   | 163 +++--
>   doc/guides/rel_notes/release_20_11.rst        |  23 +
>   doc/guides/sample_app_ug/ioat.rst             |   8 +-
>   drivers/raw/ioat/dpdk_idxd_cfg.py             |  79 +++
>   drivers/raw/ioat/idxd_pci.c                   | 345 ++++++++++
>   drivers/raw/ioat/idxd_vdev.c                  | 233 +++++++
>   drivers/raw/ioat/ioat_common.c                | 244 +++++++
>   drivers/raw/ioat/ioat_private.h               |  82 +++
>   drivers/raw/ioat/ioat_rawdev.c                |  92 +--
>   drivers/raw/ioat/ioat_rawdev_test.c           | 130 +++-
>   .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} |  90 ++-
>   drivers/raw/ioat/meson.build                  |  15 +-
>   drivers/raw/ioat/rte_ioat_rawdev.h            | 221 +++----
>   drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 595 ++++++++++++++++++
>   examples/ioat/ioatfwd.c                       |  16 +-
>   lib/librte_eal/include/rte_common.h           |   1 +
>   usertools/dpdk-devbind.py                     |   4 +-
>   19 files changed, 1989 insertions(+), 354 deletions(-)
>   create mode 100755 drivers/raw/ioat/dpdk_idxd_cfg.py
>   create mode 100644 drivers/raw/ioat/idxd_pci.c
>   create mode 100644 drivers/raw/ioat/idxd_vdev.c
>   create mode 100644 drivers/raw/ioat/ioat_common.c
>   create mode 100644 drivers/raw/ioat/ioat_private.h
>   rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (74%)
>   create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h


Series Acked-by: Radu Nicolau <radu.nicolau@intel.com>


^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (25 preceding siblings ...)
  2020-10-02 14:07   ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Nicolau, Radu
@ 2020-10-06 21:10   ` Thomas Monjalon
  2020-10-07  9:46     ` Bruce Richardson
  26 siblings, 1 reply; 157+ messages in thread
From: Thomas Monjalon @ 2020-10-06 21:10 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, patrick.fu

> This patchset adds some small enhancements, some rework and also support
> for new hardware to the ioat rawdev driver. Most rework and enhancements
> are largely self-explanatory from the individual patches.

Another ioat patch has been merged before.
Please could you rebase?



^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support
  2020-10-06 21:10   ` Thomas Monjalon
@ 2020-10-07  9:46     ` Bruce Richardson
  0 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07  9:46 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, patrick.fu

On Tue, Oct 06, 2020 at 11:10:09PM +0200, Thomas Monjalon wrote:
> > This patchset adds some small enhancements, some rework and also support
> > for new hardware to the ioat rawdev driver. Most rework and enhancements
> > are largely self-explanatory from the individual patches.
> 
> Another ioat patch has been merged before.
> Please could you rebase?
> 
Absolutely, will do.

^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 00/25] raw/ioat: enhancements and new hardware support
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (22 preceding siblings ...)
  2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
@ 2020-10-07 16:29 ` Bruce Richardson
  2020-10-07 16:29   ` [dpdk-dev] [PATCH v5 01/25] doc/api: add ioat driver to index Bruce Richardson
                     ` (24 more replies)
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
  24 siblings, 25 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:29 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson

This patchset adds some small enhancements, some rework and also support
for new hardware to the ioat rawdev driver. Most rework and enhancements
are largely self-explanatory from the individual patches.

The new hardware support is for the Intel(R) DSA accelerator which will be
present in future Intel processors. A description of this new hardware is
covered in [1]. Functions specific to the new hardware use the "idxd"
prefix, for consistency with the kernel driver.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

---
V5:
 * Rebased to latest main branch.

V4:
 * Fixed compile with FreeBSD clang
 * Improved autotests for fill operation

V3:
 * More doc updates including release note updates throughout the set
 * Added in fill operation
 * Added in fix for missing close operation
 * Added in fix for doc building to ensure ioat is in the index

V2:
 * Included documentation additions in the set
 * Split off the rawdev unit test changes to a separate patchset for easier
   review
 * General code improvements and cleanups 


Bruce Richardson (19):
  doc/api: add ioat driver to index
  raw/ioat: enable use from C++ code
  raw/ioat: include extra info in error messages
  raw/ioat: split header for readability
  raw/ioat: rename functions to be operation-agnostic
  raw/ioat: add separate API for fence call
  raw/ioat: make the HW register spec private
  raw/ioat: add skeleton for VFIO/UIO based DSA device
  raw/ioat: include example configuration script
  raw/ioat: create rawdev instances on idxd PCI probe
  raw/ioat: add datapath data structures for idxd devices
  raw/ioat: add configure function for idxd devices
  raw/ioat: add start and stop functions for idxd devices
  raw/ioat: add data path for idxd devices
  raw/ioat: add info function for idxd devices
  raw/ioat: create separate statistics structure
  raw/ioat: move xstats functions to common file
  raw/ioat: add xstats tracking for idxd devices
  raw/ioat: clean up use of common test function

Cheng Jiang (1):
  raw/ioat: add a flag to control copying handle parameters

Kevin Laatz (5):
  raw/ioat: fix missing close function
  usertools/dpdk-devbind.py: add support for DSA HW
  raw/ioat: add vdev probe for DSA/idxd devices
  raw/ioat: create rawdev instances for idxd vdevs
  raw/ioat: add fill operation

 doc/api/doxy-api-index.md                     |   1 +
 doc/api/doxy-api.conf.in                      |   1 +
 doc/guides/rawdevs/ioat.rst                   | 163 +++--
 doc/guides/rel_notes/release_20_11.rst        |  23 +
 doc/guides/sample_app_ug/ioat.rst             |   8 +-
 drivers/raw/ioat/dpdk_idxd_cfg.py             |  79 +++
 drivers/raw/ioat/idxd_pci.c                   | 345 ++++++++++
 drivers/raw/ioat/idxd_vdev.c                  | 233 +++++++
 drivers/raw/ioat/ioat_common.c                | 244 +++++++
 drivers/raw/ioat/ioat_private.h               |  82 +++
 drivers/raw/ioat/ioat_rawdev.c                |  92 +--
 drivers/raw/ioat/ioat_rawdev_test.c           | 130 +++-
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} |  90 ++-
 drivers/raw/ioat/meson.build                  |  15 +-
 drivers/raw/ioat/rte_ioat_rawdev.h            | 221 +++----
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 595 ++++++++++++++++++
 examples/ioat/ioatfwd.c                       |  16 +-
 lib/librte_eal/include/rte_common.h           |   1 +
 usertools/dpdk-devbind.py                     |   6 +-
 19 files changed, 1991 insertions(+), 354 deletions(-)
 create mode 100755 drivers/raw/ioat/dpdk_idxd_cfg.py
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/idxd_vdev.c
 create mode 100644 drivers/raw/ioat/ioat_common.c
 create mode 100644 drivers/raw/ioat/ioat_private.h
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (74%)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 01/25] doc/api: add ioat driver to index
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
@ 2020-10-07 16:29   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 02/25] raw/ioat: fix missing close function Bruce Richardson
                     ` (23 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:29 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add the ioat driver to the doxygen documentation.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/api/doxy-api-index.md | 1 +
 doc/api/doxy-api.conf.in  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 037c77579..a9c12d1a2 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -44,6 +44,7 @@ The public API headers are grouped by topics:
   [ixgbe]              (@ref rte_pmd_ixgbe.h),
   [i40e]               (@ref rte_pmd_i40e.h),
   [ice]                (@ref rte_pmd_ice.h),
+  [ioat]               (@ref rte_ioat_rawdev.h),
   [bnxt]               (@ref rte_pmd_bnxt.h),
   [dpaa]               (@ref rte_pmd_dpaa.h),
   [dpaa2]              (@ref rte_pmd_dpaa2.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index ddef755c2..e37f8c2e8 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -18,6 +18,7 @@ INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/drivers/net/softnic \
                           @TOPDIR@/drivers/raw/dpaa2_cmdif \
                           @TOPDIR@/drivers/raw/dpaa2_qdma \
+                          @TOPDIR@/drivers/raw/ioat \
                           @TOPDIR@/lib/librte_eal/include \
                           @TOPDIR@/lib/librte_eal/include/generic \
                           @TOPDIR@/lib/librte_acl \
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 02/25] raw/ioat: fix missing close function
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
  2020-10-07 16:29   ` [dpdk-dev] [PATCH v5 01/25] doc/api: add ioat driver to index Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 03/25] raw/ioat: enable use from C++ code Bruce Richardson
                     ` (22 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev
  Cc: patrick.fu, thomas, Kevin Laatz, stable, Sunil Pai G,
	Bruce Richardson, Radu Nicolau

From: Kevin Laatz <kevin.laatz@intel.com>

When rte_rawdev_pmd_release() is called, rte_rawdev_close() looks for a
dev_close function for the device, causing a segmentation fault when no
close() function is implemented for the driver.

This patch resolves the issue by adding a stub function ioat_dev_close().

Fixes: f687e842e328 ("raw/ioat: introduce IOAT driver")
Cc: stable@dpdk.org

Reported-by: Sunil Pai G <sunil.pai.g@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Sunil Pai G <sunil.pai.g@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 7f1a15436..0732b059f 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -203,6 +203,12 @@ ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
 	return 0;
 }
 
+static int
+ioat_dev_close(struct rte_rawdev *dev __rte_unused)
+{
+	return 0;
+}
+
 extern int ioat_rawdev_test(uint16_t dev_id);
 
 static int
@@ -212,6 +218,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 			.dev_configure = ioat_dev_configure,
 			.dev_start = ioat_dev_start,
 			.dev_stop = ioat_dev_stop,
+			.dev_close = ioat_dev_close,
 			.dev_info_get = ioat_dev_info_get,
 			.xstats_get = ioat_xstats_get,
 			.xstats_get_names = ioat_xstats_get_names,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 03/25] raw/ioat: enable use from C++ code
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
  2020-10-07 16:29   ` [dpdk-dev] [PATCH v5 01/25] doc/api: add ioat driver to index Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 02/25] raw/ioat: fix missing close function Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 04/25] raw/ioat: include extra info in error messages Bruce Richardson
                     ` (21 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

To allow the header file to be used from C++ code, we need to ensure all
typecasts are explicit and include an 'extern "C"' guard.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/rte_ioat_rawdev.h | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index f765a6557..3d8419271 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -5,6 +5,10 @@
 #ifndef _RTE_IOAT_RAWDEV_H_
 #define _RTE_IOAT_RAWDEV_H_
 
+#ifdef __cplusplus
+extern "C" {
+#endif
+
 /**
  * @file rte_ioat_rawdev.h
  *
@@ -100,7 +104,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
 		int fence)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	unsigned short read = ioat->next_read;
 	unsigned short write = ioat->next_write;
 	unsigned short mask = ioat->ring_size - 1;
@@ -141,7 +146,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 static inline void
 rte_ioat_do_copies(int dev_id)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
 			.control.completion_update = 1;
 	rte_compiler_barrier();
@@ -190,7 +196,8 @@ static inline int
 rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	unsigned short mask = (ioat->ring_size - 1);
 	unsigned short read = ioat->next_read;
 	unsigned short end_read, count;
@@ -212,13 +219,13 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
 		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
 
-		_mm_storeu_si128((void *)&src_hdls[i],
+		_mm_storeu_si128((__m128i *)&src_hdls[i],
 				_mm_unpacklo_epi64(hdls0, hdls1));
-		_mm_storeu_si128((void *)&dst_hdls[i],
+		_mm_storeu_si128((__m128i *)&dst_hdls[i],
 				_mm_unpackhi_epi64(hdls0, hdls1));
 	}
 	for (; i < count; i++, read++) {
-		uintptr_t *hdls = (void *)&ioat->hdls[read & mask];
+		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
 		src_hdls[i] = hdls[0];
 		dst_hdls[i] = hdls[1];
 	}
@@ -228,4 +235,8 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+#ifdef __cplusplus
+}
+#endif
+
 #endif /* _RTE_IOAT_RAWDEV_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 04/25] raw/ioat: include extra info in error messages
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (2 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 03/25] raw/ioat: enable use from C++ code Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
                     ` (20 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

In case of any failures, include the function name and line number in the
error message, to make tracking down the failure easier.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_rawdev_test.c | 53 +++++++++++++++++++----------
 1 file changed, 35 insertions(+), 18 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index e5b50ae9f..77f96bba3 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -16,6 +16,23 @@ int ioat_rawdev_test(uint16_t dev_id); /* pre-define to keep compiler happy */
 static struct rte_mempool *pool;
 static unsigned short expected_ring_size[MAX_SUPPORTED_RAWDEVS];
 
+#define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
+
+static inline int
+__rte_format_printf(3, 4)
+print_err(const char *func, int lineno, const char *format, ...)
+{
+	va_list ap;
+	int ret;
+
+	ret = fprintf(stderr, "In %s:%d - ", func, lineno);
+	va_start(ap, format);
+	ret += vfprintf(stderr, format, ap);
+	va_end(ap);
+
+	return ret;
+}
+
 static int
 test_enqueue_copies(int dev_id)
 {
@@ -45,7 +62,7 @@ test_enqueue_copies(int dev_id)
 				(uintptr_t)src,
 				(uintptr_t)dst,
 				0 /* no fence */) != 1) {
-			printf("Error with rte_ioat_enqueue_copy\n");
+			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
 		rte_ioat_do_copies(dev_id);
@@ -53,18 +70,18 @@ test_enqueue_copies(int dev_id)
 
 		if (rte_ioat_completed_copies(dev_id, 1, (void *)&completed[0],
 				(void *)&completed[1]) != 1) {
-			printf("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_copies\n");
 			return -1;
 		}
 		if (completed[0] != src || completed[1] != dst) {
-			printf("Error with completions: got (%p, %p), not (%p,%p)\n",
+			PRINT_ERR("Error with completions: got (%p, %p), not (%p,%p)\n",
 					completed[0], completed[1], src, dst);
 			return -1;
 		}
 
 		for (i = 0; i < length; i++)
 			if (dst_data[i] != src_data[i]) {
-				printf("Data mismatch at char %u\n", i);
+				PRINT_ERR("Data mismatch at char %u\n", i);
 				return -1;
 			}
 		rte_pktmbuf_free(src);
@@ -97,7 +114,7 @@ test_enqueue_copies(int dev_id)
 					(uintptr_t)srcs[i],
 					(uintptr_t)dsts[i],
 					0 /* nofence */) != 1) {
-				printf("Error with rte_ioat_enqueue_copy for buffer %u\n",
+				PRINT_ERR("Error with rte_ioat_enqueue_copy for buffer %u\n",
 						i);
 				return -1;
 			}
@@ -107,18 +124,18 @@ test_enqueue_copies(int dev_id)
 
 		if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
 				(void *)completed_dst) != RTE_DIM(srcs)) {
-			printf("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_copies\n");
 			return -1;
 		}
 		for (i = 0; i < RTE_DIM(srcs); i++) {
 			char *src_data, *dst_data;
 
 			if (completed_src[i] != srcs[i]) {
-				printf("Error with source pointer %u\n", i);
+				PRINT_ERR("Error with source pointer %u\n", i);
 				return -1;
 			}
 			if (completed_dst[i] != dsts[i]) {
-				printf("Error with dest pointer %u\n", i);
+				PRINT_ERR("Error with dest pointer %u\n", i);
 				return -1;
 			}
 
@@ -126,7 +143,7 @@ test_enqueue_copies(int dev_id)
 			dst_data = rte_pktmbuf_mtod(dsts[i], char *);
 			for (j = 0; j < length; j++)
 				if (src_data[j] != dst_data[j]) {
-					printf("Error with copy of packet %u, byte %u\n",
+					PRINT_ERR("Error with copy of packet %u, byte %u\n",
 							i, j);
 					return -1;
 				}
@@ -159,26 +176,26 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	rte_rawdev_info_get(dev_id, &info, sizeof(p));
 	if (p.ring_size != expected_ring_size[dev_id]) {
-		printf("Error, initial ring size is not as expected (Actual: %d, Expected: %d)\n",
+		PRINT_ERR("Error, initial ring size is not as expected (Actual: %d, Expected: %d)\n",
 				(int)p.ring_size, expected_ring_size[dev_id]);
 		return -1;
 	}
 
 	p.ring_size = IOAT_TEST_RINGSIZE;
 	if (rte_rawdev_configure(dev_id, &info, sizeof(p)) != 0) {
-		printf("Error with rte_rawdev_configure()\n");
+		PRINT_ERR("Error with rte_rawdev_configure()\n");
 		return -1;
 	}
 	rte_rawdev_info_get(dev_id, &info, sizeof(p));
 	if (p.ring_size != IOAT_TEST_RINGSIZE) {
-		printf("Error, ring size is not %d (%d)\n",
+		PRINT_ERR("Error, ring size is not %d (%d)\n",
 				IOAT_TEST_RINGSIZE, (int)p.ring_size);
 		return -1;
 	}
 	expected_ring_size[dev_id] = p.ring_size;
 
 	if (rte_rawdev_start(dev_id) != 0) {
-		printf("Error with rte_rawdev_start()\n");
+		PRINT_ERR("Error with rte_rawdev_start()\n");
 		return -1;
 	}
 
@@ -189,7 +206,7 @@ ioat_rawdev_test(uint16_t dev_id)
 			2048, /* data room size */
 			info.socket_id);
 	if (pool == NULL) {
-		printf("Error with mempool creation\n");
+		PRINT_ERR("Error with mempool creation\n");
 		return -1;
 	}
 
@@ -198,14 +215,14 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	snames = malloc(sizeof(*snames) * nb_xstats);
 	if (snames == NULL) {
-		printf("Error allocating xstat names memory\n");
+		PRINT_ERR("Error allocating xstat names memory\n");
 		goto err;
 	}
 	rte_rawdev_xstats_names_get(dev_id, snames, nb_xstats);
 
 	ids = malloc(sizeof(*ids) * nb_xstats);
 	if (ids == NULL) {
-		printf("Error allocating xstat ids memory\n");
+		PRINT_ERR("Error allocating xstat ids memory\n");
 		goto err;
 	}
 	for (i = 0; i < nb_xstats; i++)
@@ -213,7 +230,7 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	stats = malloc(sizeof(*stats) * nb_xstats);
 	if (stats == NULL) {
-		printf("Error allocating xstat memory\n");
+		PRINT_ERR("Error allocating xstat memory\n");
 		goto err;
 	}
 
@@ -233,7 +250,7 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	rte_rawdev_stop(dev_id);
 	if (rte_rawdev_xstats_reset(dev_id, NULL, 0) != 0) {
-		printf("Error resetting xstat values\n");
+		PRINT_ERR("Error resetting xstat values\n");
 		goto err;
 	}
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 05/25] raw/ioat: add a flag to control copying handle parameters
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (3 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 04/25] raw/ioat: include extra info in error messages Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 06/25] raw/ioat: split header for readability Bruce Richardson
                     ` (19 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev
  Cc: patrick.fu, thomas, Cheng Jiang, Bruce Richardson, Kevin Laatz,
	Radu Nicolau

From: Cheng Jiang <Cheng1.jiang@intel.com>

Add a flag which controls whether the rte_ioat_enqueue_copy and
rte_ioat_completed_copies functions should process handle parameters. Not
doing so can improve performance when handle parameters are not
necessary.
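
A minimal configuration sketch, assuming dev_id refers to an already-probed
ioat rawdev, and following the configure call used in the driver self-test:

        struct rte_ioat_rawdev_config conf = {
                .ring_size = 512,      /* must be a power of two, 64 - 4096 */
                .hdls_disable = true,  /* skip tracking of user-supplied handles */
        };
        struct rte_rawdev_info info = { .dev_private = &conf };

        if (rte_rawdev_configure(dev_id, &info, sizeof(conf)) != 0)
                return -1;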

Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst            |  3 ++
 doc/guides/rel_notes/release_20_11.rst |  6 ++++
 drivers/raw/ioat/ioat_rawdev.c         |  2 ++
 drivers/raw/ioat/rte_ioat_rawdev.h     | 45 +++++++++++++++++++-------
 4 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index c46460ff4..af00d77fb 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -129,6 +129,9 @@ output, the ``dev_private`` structure element cannot be NULL, and must
 point to a valid ``rte_ioat_rawdev_config`` structure, containing the ring
 size to be used by the device. The ring size must be a power of two,
 between 64 and 4096.
+If it is not needed, the driver's tracking of user-provided completion
+handles may also be disabled by setting the ``hdls_disable`` flag in
+the configuration structure.
 
 The following code shows how the device is configured in
 ``test_ioat_rawdev.c``:
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index cdf20404c..1e73c26d4 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -116,6 +116,12 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Updated ioat rawdev driver**
+
+  The ioat rawdev driver has been updated and enhanced. Changes include:
+
+  * Added a per-device configuration flag to disable management of user-provided completion handles
+
 
 Removed Items
 -------------
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 0732b059f..ea9f51ffc 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -58,6 +58,7 @@ ioat_dev_configure(const struct rte_rawdev *dev, rte_rawdev_obj_t config,
 		return -EINVAL;
 
 	ioat->ring_size = params->ring_size;
+	ioat->hdls_disable = params->hdls_disable;
 	if (ioat->desc_ring != NULL) {
 		rte_memzone_free(ioat->desc_mz);
 		ioat->desc_ring = NULL;
@@ -122,6 +123,7 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 		return -EINVAL;
 
 	cfg->ring_size = ioat->ring_size;
+	cfg->hdls_disable = ioat->hdls_disable;
 	return 0;
 }
 
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 3d8419271..28ce95cc9 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -38,7 +38,8 @@ extern "C" {
  * an ioat rawdev instance.
  */
 struct rte_ioat_rawdev_config {
-	unsigned short ring_size;
+	unsigned short ring_size; /**< size of job submission descriptor ring */
+	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
 /**
@@ -56,6 +57,7 @@ struct rte_ioat_rawdev {
 
 	unsigned short ring_size;
 	struct rte_ioat_generic_hw_desc *desc_ring;
+	bool hdls_disable;
 	__m128i *hdls; /* completion handles for returning to user */
 
 
@@ -88,10 +90,14 @@ struct rte_ioat_rawdev {
  *   The length of the data to be copied
  * @param src_hdl
  *   An opaque handle for the source data, to be returned when this operation
- *   has been completed and the user polls for the completion details
+ *   has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdl
  *   An opaque handle for the destination data, to be returned when this
- *   operation has been completed and the user polls for the completion details
+ *   operation has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param fence
  *   A flag parameter indicating that hardware should not begin to perform any
  *   subsequently enqueued copy operations until after this operation has
@@ -126,8 +132,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
 	desc->src_addr = src;
 	desc->dest_addr = dst;
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
 
-	ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl, (int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
 	ioat->enqueued++;
@@ -174,19 +182,29 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /**
  * Returns details of copy operations that have been completed
  *
- * Returns to the caller the user-provided "handles" for the copy operations
- * which have been completed by the hardware, and not already returned by
- * a previous call to this API.
+ * If the hdls_disable option was not set when the device was configured,
+ * the function will return to the caller the user-provided "handles" for
+ * the copy operations which have been completed by the hardware, and not
+ * already returned by a previous call to this API.
+ * If the hdls_disable option for the device was set on configure, the
+ * max_copies, src_hdls and dst_hdls parameters will be ignored, and the
+ * function returns the number of newly-completed operations.
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  * @param max_copies
  *   The number of entries which can fit in the src_hdls and dst_hdls
- *   arrays, i.e. max number of completed operations to report
+ *   arrays, i.e. max number of completed operations to report.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies
+ *   Array to hold the source handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies
+ *   Array to hold the destination handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @return
  *   -1 on error, with rte_errno set appropriately.
  *   Otherwise number of completed operations i.e. number of entries written
@@ -212,6 +230,11 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		return -1;
 	}
 
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
 	if (count > max_copies)
 		count = max_copies;
 
@@ -229,7 +252,7 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		src_hdls[i] = hdls[0];
 		dst_hdls[i] = hdls[1];
 	}
-
+end:
 	ioat->next_read = read;
 	ioat->completed += count;
 	return count;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 06/25] raw/ioat: split header for readability
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (4 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
                     ` (18 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Rather than having a single long, complicated header file for general use,
we can split things so that there is one header with all the publicly needed
information - data structs and function prototypes - while the rest of the
internal details are kept separate. This makes the APIs easier to read,
understand and use.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev.h     | 147 +---------------------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 168 +++++++++++++++++++++++++
 3 files changed, 175 insertions(+), 141 deletions(-)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 0878418ae..f66e9b605 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,4 +8,5 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
+		'rte_ioat_rawdev_fns.h',
 		'rte_ioat_spec.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 28ce95cc9..7067b352f 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -18,12 +18,7 @@ extern "C" {
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
-#include <x86intrin.h>
-#include <rte_atomic.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+#include <rte_common.h>
 
 /** Name of the device driver */
 #define IOAT_PMD_RAWDEV_NAME rawdev_ioat
@@ -42,38 +37,6 @@ struct rte_ioat_rawdev_config {
 	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
-/**
- * @internal
- * Structure representing a device instance
- */
-struct rte_ioat_rawdev {
-	struct rte_rawdev *rawdev;
-	const struct rte_memzone *mz;
-	const struct rte_memzone *desc_mz;
-
-	volatile struct rte_ioat_registers *regs;
-	phys_addr_t status_addr;
-	phys_addr_t ring_addr;
-
-	unsigned short ring_size;
-	struct rte_ioat_generic_hw_desc *desc_ring;
-	bool hdls_disable;
-	__m128i *hdls; /* completion handles for returning to user */
-
-
-	unsigned short next_read;
-	unsigned short next_write;
-
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
-	/* to report completions, the device will write status back here */
-	volatile uint64_t status __rte_cache_aligned;
-};
-
 /**
  * Enqueue a copy operation onto the ioat device
  *
@@ -108,39 +71,7 @@ struct rte_ioat_rawdev {
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	unsigned short read = ioat->next_read;
-	unsigned short write = ioat->next_write;
-	unsigned short mask = ioat->ring_size - 1;
-	unsigned short space = mask + read - write;
-	struct rte_ioat_generic_hw_desc *desc;
-
-	if (space == 0) {
-		ioat->enqueue_failed++;
-		return 0;
-	}
-
-	ioat->next_write = write + 1;
-	write &= mask;
-
-	desc = &ioat->desc_ring[write];
-	desc->size = length;
-	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
-	desc->src_addr = src;
-	desc->dest_addr = dst;
-	if (!ioat->hdls_disable)
-		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
-					(int64_t)src_hdl);
-
-	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
-
-	ioat->enqueued++;
-	return 1;
-}
+		int fence);
 
 /**
  * Trigger hardware to begin performing enqueued copy operations
@@ -152,32 +83,7 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
-			.control.completion_update = 1;
-	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
-	ioat->started = ioat->enqueued;
-}
-
-/**
- * @internal
- * Returns the index of the last completed operation.
- */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
-{
-	uint64_t status = ioat->status;
-
-	/* lower 3 bits indicate "transfer status" : active, idle, halted.
-	 * We can ignore bit 0.
-	 */
-	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
-	return (status - ioat->ring_addr) >> 6;
-}
+rte_ioat_do_copies(int dev_id);
 
 /**
  * Returns details of copy operations that have been completed
@@ -212,51 +118,10 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
  */
 static inline int
 rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
-		uintptr_t *src_hdls, uintptr_t *dst_hdls)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	unsigned short mask = (ioat->ring_size - 1);
-	unsigned short read = ioat->next_read;
-	unsigned short end_read, count;
-	int error;
-	int i = 0;
-
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
-	count = (end_read - (read & mask)) & mask;
-
-	if (error) {
-		rte_errno = EIO;
-		return -1;
-	}
-
-	if (ioat->hdls_disable) {
-		read += count;
-		goto end;
-	}
+		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
-	if (count > max_copies)
-		count = max_copies;
-
-	for (; i < count - 1; i += 2, read += 2) {
-		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
-		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
-
-		_mm_storeu_si128((__m128i *)&src_hdls[i],
-				_mm_unpacklo_epi64(hdls0, hdls1));
-		_mm_storeu_si128((__m128i *)&dst_hdls[i],
-				_mm_unpackhi_epi64(hdls0, hdls1));
-	}
-	for (; i < count; i++, read++) {
-		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
-		src_hdls[i] = hdls[0];
-		dst_hdls[i] = hdls[1];
-	}
-end:
-	ioat->next_read = read;
-	ioat->completed += count;
-	return count;
-}
+/* include the implementation details from a separate file */
+#include "rte_ioat_rawdev_fns.h"
 
 #ifdef __cplusplus
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
new file mode 100644
index 000000000..4b7bdb8e2
--- /dev/null
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -0,0 +1,168 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Intel Corporation
+ */
+#ifndef _RTE_IOAT_RAWDEV_FNS_H_
+#define _RTE_IOAT_RAWDEV_FNS_H_
+
+#include <x86intrin.h>
+#include <rte_rawdev.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device instance
+ */
+struct rte_ioat_rawdev {
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	const struct rte_memzone *desc_mz;
+
+	volatile struct rte_ioat_registers *regs;
+	phys_addr_t status_addr;
+	phys_addr_t ring_addr;
+
+	unsigned short ring_size;
+	bool hdls_disable;
+	struct rte_ioat_generic_hw_desc *desc_ring;
+	__m128i *hdls; /* completion handles for returning to user */
+
+
+	unsigned short next_read;
+	unsigned short next_write;
+
+	/* some statistics for tracking, if added/changed update xstats fns*/
+	uint64_t enqueue_failed __rte_cache_aligned;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+
+	/* to report completions, the device will write status back here */
+	volatile uint64_t status __rte_cache_aligned;
+};
+
+/*
+ * Enqueue a copy operation onto the ioat device
+ */
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
+		int fence)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short read = ioat->next_read;
+	unsigned short write = ioat->next_write;
+	unsigned short mask = ioat->ring_size - 1;
+	unsigned short space = mask + read - write;
+	struct rte_ioat_generic_hw_desc *desc;
+
+	if (space == 0) {
+		ioat->enqueue_failed++;
+		return 0;
+	}
+
+	ioat->next_write = write + 1;
+	write &= mask;
+
+	desc = &ioat->desc_ring[write];
+	desc->size = length;
+	/* set descriptor write-back every 16th descriptor */
+	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
+	desc->src_addr = src;
+	desc->dest_addr = dst;
+
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
+	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
+
+	ioat->enqueued++;
+	return 1;
+}
+
+/*
+ * Trigger hardware to begin performing enqueued copy operations
+ */
+static inline void
+rte_ioat_do_copies(int dev_id)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
+			.control.completion_update = 1;
+	rte_compiler_barrier();
+	ioat->regs->dmacount = ioat->next_write;
+	ioat->started = ioat->enqueued;
+}
+
+/**
+ * @internal
+ * Returns the index of the last completed operation.
+ */
+static inline int
+rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+{
+	uint64_t status = ioat->status;
+
+	/* lower 3 bits indicate "transfer status" : active, idle, halted.
+	 * We can ignore bit 0.
+	 */
+	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
+	return (status - ioat->ring_addr) >> 6;
+}
+
+/*
+ * Returns details of copy operations that have been completed
+ */
+static inline int
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short mask = (ioat->ring_size - 1);
+	unsigned short read = ioat->next_read;
+	unsigned short end_read, count;
+	int error;
+	int i = 0;
+
+	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	count = (end_read - (read & mask)) & mask;
+
+	if (error) {
+		rte_errno = EIO;
+		return -1;
+	}
+
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
+	if (count > max_copies)
+		count = max_copies;
+
+	for (; i < count - 1; i += 2, read += 2) {
+		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
+		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
+
+		_mm_storeu_si128((__m128i *)&src_hdls[i],
+				_mm_unpacklo_epi64(hdls0, hdls1));
+		_mm_storeu_si128((__m128i *)&dst_hdls[i],
+				_mm_unpackhi_epi64(hdls0, hdls1));
+	}
+	for (; i < count; i++, read++) {
+		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
+		src_hdls[i] = hdls[0];
+		dst_hdls[i] = hdls[1];
+	}
+
+end:
+	ioat->next_read = read;
+	ioat->completed += count;
+	return count;
+}
+
+#endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 07/25] raw/ioat: rename functions to be operation-agnostic
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (5 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 06/25] raw/ioat: split header for readability Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 08/25] raw/ioat: add separate API for fence call Bruce Richardson
                     ` (17 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Since the hardware supported by the ioat driver is capable of operations
other than just copies, we can rename the doorbell and completion-return
functions to not have "copies" in their names. These functions are not
copy-specific, and so would apply for other operations which may be added
later to the driver.

Also add a suitable warning using the deprecation attribute for any code
using the old function names.
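
As a quick before/after sketch (dev_id, src_hdls and dst_hdls are assumed
to be set up already; the parameters are unchanged by the rename):

        /* old, copy-specific names - still provided but marked deprecated */
        rte_ioat_do_copies(dev_id);
        rte_ioat_completed_copies(dev_id, 64, src_hdls, dst_hdls);

        /* new, operation-agnostic names */
        rte_ioat_perform_ops(dev_id);
        rte_ioat_completed_ops(dev_id, 64, src_hdls, dst_hdls);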

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst            | 16 ++++++++--------
 doc/guides/rel_notes/release_20_11.rst |  9 +++++++++
 doc/guides/sample_app_ug/ioat.rst      |  8 ++++----
 drivers/raw/ioat/ioat_rawdev_test.c    | 12 ++++++------
 drivers/raw/ioat/rte_ioat_rawdev.h     | 14 +++++++-------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 20 ++++++++++++++++----
 examples/ioat/ioatfwd.c                |  4 ++--
 lib/librte_eal/include/rte_common.h    |  1 +
 8 files changed, 53 insertions(+), 31 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index af00d77fb..3db5f5d09 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -157,9 +157,9 @@ Performing Data Copies
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 To perform data copies using IOAT rawdev devices, the functions
-``rte_ioat_enqueue_copy()`` and ``rte_ioat_do_copies()`` should be used.
+``rte_ioat_enqueue_copy()`` and ``rte_ioat_perform_ops()`` should be used.
 Once copies have been completed, the completion will be reported back when
-the application calls ``rte_ioat_completed_copies()``.
+the application calls ``rte_ioat_completed_ops()``.
 
 The ``rte_ioat_enqueue_copy()`` function enqueues a single copy to the
 device ring for copying at a later point. The parameters to that function
@@ -172,11 +172,11 @@ pointers if packet data is being copied.
 
 While the ``rte_ioat_enqueue_copy()`` function enqueues a copy operation on
 the device ring, the copy will not actually be performed until after the
-application calls the ``rte_ioat_do_copies()`` function. This function
+application calls the ``rte_ioat_perform_ops()`` function. This function
 informs the device hardware of the elements enqueued on the ring, and the
 device will begin to process them. It is expected that, for efficiency
 reasons, a burst of operations will be enqueued to the device via multiple
-enqueue calls between calls to the ``rte_ioat_do_copies()`` function.
+enqueue calls between calls to the ``rte_ioat_perform_ops()`` function.
 
 The following code from ``test_ioat_rawdev.c`` demonstrates how to enqueue
 a burst of copies to the device and start the hardware processing of them:
@@ -210,10 +210,10 @@ a burst of copies to the device and start the hardware processing of them:
                         return -1;
                 }
         }
-        rte_ioat_do_copies(dev_id);
+        rte_ioat_perform_ops(dev_id);
 
 To retrieve information about completed copies, the API
-``rte_ioat_completed_copies()`` should be used. This API will return to the
+``rte_ioat_completed_ops()`` should be used. This API will return to the
 application a set of completion handles passed in when the relevant copies
 were enqueued.
 
@@ -223,9 +223,9 @@ is correct before freeing the data buffers using the returned handles:
 
 .. code-block:: C
 
-        if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+        if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
                         (void *)completed_dst) != RTE_DIM(srcs)) {
-                printf("Error with rte_ioat_completed_copies\n");
+                printf("Error with rte_ioat_completed_ops\n");
                 return -1;
         }
         for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 1e73c26d4..e7d038f31 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -121,6 +121,11 @@ New Features
   The ioat rawdev driver has been updated and enhanced. Changes include:
 
   * Added a per-device configuration flag to disable management of user-provided completion handles
+  * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
+    and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
+    to better reflect the APIs' purposes, and remove the implication that
+    they are limited to copy operations only.
+    [Note: The old API is still provided but marked as deprecated in the code]
 
 
 Removed Items
@@ -234,6 +239,10 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* raw/ioat: As noted above, the ``rte_ioat_do_copies()`` and
+  ``rte_ioat_completed_copies()`` functions have been renamed to
+  ``rte_ioat_perform_ops()`` and ``rte_ioat_completed_ops()`` respectively.
+
 
 ABI Changes
 -----------
diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst
index 3f7d5c34a..964160dff 100644
--- a/doc/guides/sample_app_ug/ioat.rst
+++ b/doc/guides/sample_app_ug/ioat.rst
@@ -394,7 +394,7 @@ packet using ``pktmbuf_sw_copy()`` function and enqueue them to an rte_ring:
                 nb_enq = ioat_enqueue_packets(pkts_burst,
                     nb_rx, rx_config->ioat_ids[i]);
                 if (nb_enq > 0)
-                    rte_ioat_do_copies(rx_config->ioat_ids[i]);
+                    rte_ioat_perform_ops(rx_config->ioat_ids[i]);
             } else {
                 /* Perform packet software copy, free source packets */
                 int ret;
@@ -433,7 +433,7 @@ The packets are received in burst mode using ``rte_eth_rx_burst()``
 function. When using hardware copy mode the packets are enqueued in
 copying device's buffer using ``ioat_enqueue_packets()`` which calls
 ``rte_ioat_enqueue_copy()``. When all received packets are in the
-buffer the copy operations are started by calling ``rte_ioat_do_copies()``.
+buffer the copy operations are started by calling ``rte_ioat_perform_ops()``.
 Function ``rte_ioat_enqueue_copy()`` operates on physical address of
 the packet. Structure ``rte_mbuf`` contains only physical address to
 start of the data buffer (``buf_iova``). Thus the address is adjusted
@@ -490,7 +490,7 @@ or indirect mbufs, then multiple copy operations must be used.
 
 
 All completed copies are processed by ``ioat_tx_port()`` function. When using
-hardware copy mode the function invokes ``rte_ioat_completed_copies()``
+hardware copy mode the function invokes ``rte_ioat_completed_ops()``
 on each assigned IOAT channel to gather copied packets. If software copy
 mode is used the function dequeues copied packets from the rte_ring. Then each
 packet MAC address is changed if it was enabled. After that copies are sent
@@ -510,7 +510,7 @@ in burst mode using `` rte_eth_tx_burst()``.
         for (i = 0; i < tx_config->nb_queues; i++) {
             if (copy_mode == COPY_MODE_IOAT_NUM) {
                 /* Deque the mbufs from IOAT device. */
-                nb_dq = rte_ioat_completed_copies(
+                nb_dq = rte_ioat_completed_ops(
                     tx_config->ioat_ids[i], MAX_PKT_BURST,
                     (void *)mbufs_src, (void *)mbufs_dst);
             } else {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 77f96bba3..439b46c03 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -65,12 +65,12 @@ test_enqueue_copies(int dev_id)
 			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(10);
 
-		if (rte_ioat_completed_copies(dev_id, 1, (void *)&completed[0],
+		if (rte_ioat_completed_ops(dev_id, 1, (void *)&completed[0],
 				(void *)&completed[1]) != 1) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		if (completed[0] != src || completed[1] != dst) {
@@ -119,12 +119,12 @@ test_enqueue_copies(int dev_id)
 				return -1;
 			}
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(100);
 
-		if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+		if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
 				(void *)completed_dst) != RTE_DIM(srcs)) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 7067b352f..5b2c47e8c 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -74,19 +74,19 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		int fence);
 
 /**
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  *
  * This API is used to write the "doorbell" to the hardware to trigger it
- * to begin the copy operations previously enqueued by rte_ioat_enqueue_copy()
+ * to begin the operations previously enqueued by rte_ioat_enqueue_copy()
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id);
+rte_ioat_perform_ops(int dev_id);
 
 /**
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  *
  * If the hdls_disable option was not set when the device was configured,
  * the function will return to the caller the user-provided "handles" for
@@ -104,11 +104,11 @@ rte_ioat_do_copies(int dev_id);
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies.
+ *   Array to hold the source handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies.
+ *   Array to hold the destination handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @return
@@ -117,7 +117,7 @@ rte_ioat_do_copies(int dev_id);
  *   to the src_hdls and dst_hdls array parameters.
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
 /* include the implementation details from a separate file */
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 4b7bdb8e2..b155d79c4 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -83,10 +83,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 }
 
 /*
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
+rte_ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -114,10 +114,10 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 }
 
 /*
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -165,4 +165,16 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static inline void
+__rte_deprecated_msg("use rte_ioat_perform_ops() instead")
+rte_ioat_do_copies(int dev_id) { rte_ioat_perform_ops(dev_id); }
+
+static inline int
+__rte_deprecated_msg("use rte_ioat_completed_ops() instead")
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	return rte_ioat_completed_ops(dev_id, max_copies, src_hdls, dst_hdls);
+}
+
 #endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 288a75c7b..67f75737b 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -406,7 +406,7 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
 			nb_enq = ioat_enqueue_packets(pkts_burst,
 				nb_rx, rx_config->ioat_ids[i]);
 			if (nb_enq > 0)
-				rte_ioat_do_copies(rx_config->ioat_ids[i]);
+				rte_ioat_perform_ops(rx_config->ioat_ids[i]);
 		} else {
 			/* Perform packet software copy, free source packets */
 			int ret;
@@ -452,7 +452,7 @@ ioat_tx_port(struct rxtx_port_config *tx_config)
 	for (i = 0; i < tx_config->nb_queues; i++) {
 		if (copy_mode == COPY_MODE_IOAT_NUM) {
 			/* Deque the mbufs from IOAT device. */
-			nb_dq = rte_ioat_completed_copies(
+			nb_dq = rte_ioat_completed_ops(
 				tx_config->ioat_ids[i], MAX_PKT_BURST,
 				(void *)mbufs_src, (void *)mbufs_dst);
 		} else {
diff --git a/lib/librte_eal/include/rte_common.h b/lib/librte_eal/include/rte_common.h
index 8f487a563..2920255fc 100644
--- a/lib/librte_eal/include/rte_common.h
+++ b/lib/librte_eal/include/rte_common.h
@@ -85,6 +85,7 @@ typedef uint16_t unaligned_uint16_t;
 
 /******* Macro to mark functions and fields scheduled for removal *****/
 #define __rte_deprecated	__attribute__((__deprecated__))
+#define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
 
 /**
  * Mark a function or variable to a weak reference.
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 08/25] raw/ioat: add separate API for fence call
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (6 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 09/25] raw/ioat: make the HW register spec private Bruce Richardson
                     ` (16 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Rather than having the fence signalled via a flag on a descriptor - which
requires reading the docs to find out whether the flag needs to go on the
last descriptor before, or the first descriptor after the fence - we can
instead add a separate fence API call. This becomes unambiguous to use,
since the fence call explicitly comes between two other enqueue calls. It
also allows more freedom of implementation in the driver code.
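
As a rough sketch (not part of the patch) of the intended usage, assuming
two copies where the second must not start before the first completes;
zero handles are passed and return values left unchecked purely for brevity:

    #include <rte_memory.h>
    #include <rte_ioat_rawdev.h>

    /* copy B must not begin until copy A has completed */
    static inline void
    ordered_pair_copy(int dev_id,
            phys_addr_t src_a, phys_addr_t dst_a, unsigned int len_a,
            phys_addr_t src_b, phys_addr_t dst_b, unsigned int len_b)
    {
        rte_ioat_enqueue_copy(dev_id, src_a, dst_a, len_a, 0, 0);
        rte_ioat_fence(dev_id);    /* earlier ops complete before later ones */
        rte_ioat_enqueue_copy(dev_id, src_b, dst_b, len_b, 0, 0);
        rte_ioat_perform_ops(dev_id);   /* ring the doorbell for the batch */
    }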

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst            |  3 +--
 doc/guides/rel_notes/release_20_11.rst |  4 ++++
 drivers/raw/ioat/ioat_rawdev_test.c    |  6 ++----
 drivers/raw/ioat/rte_ioat_rawdev.h     | 26 ++++++++++++++++++++------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 22 +++++++++++++++++++---
 examples/ioat/ioatfwd.c                | 12 ++++--------
 6 files changed, 50 insertions(+), 23 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 3db5f5d09..71bca0b28 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -203,8 +203,7 @@ a burst of copies to the device and start the hardware processing of them:
                                 dsts[i]->buf_iova + dsts[i]->data_off,
                                 length,
                                 (uintptr_t)srcs[i],
-                                (uintptr_t)dsts[i],
-                                0 /* nofence */) != 1) {
+                                (uintptr_t)dsts[i]) != 1) {
                         printf("Error with rte_ioat_enqueue_copy for buffer %u\n",
                                         i);
                         return -1;
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index e7d038f31..25ede96d9 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -126,6 +126,10 @@ New Features
     to better reflect the APIs' purposes, and remove the implication that
     they are limited to copy operations only.
     [Note: The old API is still provided but marked as deprecated in the code]
+  * Added a new API ``rte_ioat_fence()`` to add a fence between operations.
+    This API replaces the ``fence`` flag parameter in the ``rte_ioat_enqueue_copy()`` function,
+    and is clearer as there is no ambiguity as to whether the flag should be
+    set on the last operation before the fence or the first operation after it.
 
 
 Removed Items
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 439b46c03..8b665cc9a 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -60,8 +60,7 @@ test_enqueue_copies(int dev_id)
 				dst->buf_iova + dst->data_off,
 				length,
 				(uintptr_t)src,
-				(uintptr_t)dst,
-				0 /* no fence */) != 1) {
+				(uintptr_t)dst) != 1) {
 			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
@@ -112,8 +111,7 @@ test_enqueue_copies(int dev_id)
 					dsts[i]->buf_iova + dsts[i]->data_off,
 					length,
 					(uintptr_t)srcs[i],
-					(uintptr_t)dsts[i],
-					0 /* nofence */) != 1) {
+					(uintptr_t)dsts[i]) != 1) {
 				PRINT_ERR("Error with rte_ioat_enqueue_copy for buffer %u\n",
 						i);
 				return -1;
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 5b2c47e8c..6b891cd44 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -61,17 +61,31 @@ struct rte_ioat_rawdev_config {
  *   operation has been completed and the user polls for the completion details.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
- * @param fence
- *   A flag parameter indicating that hardware should not begin to perform any
- *   subsequently enqueued copy operations until after this operation has
- *   completed
  * @return
  *   Number of operations enqueued, either 0 or 1
  */
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
-		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence);
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl);
+
+/**
+ * Add a fence to force ordering between operations
+ *
+ * This adds a fence to a sequence of operations to enforce ordering, such that
+ * all operations enqueued before the fence must be completed before operations
+ * after the fence.
+ * NOTE: Since this fence may be added as a flag to the last operation enqueued,
+ * this API may not function correctly when called immediately after an
+ * "rte_ioat_perform_ops" call i.e. before any new operations are enqueued.
+ *
+ * @param dev_id
+ *   The rawdev device id of the ioat instance
+ * @return
+ *   Number of fences enqueued, either 0 or 1
+ */
+static inline int
+rte_ioat_fence(int dev_id);
+
 
 /**
  * Trigger hardware to begin performing enqueued operations
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index b155d79c4..466721a23 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -47,8 +47,7 @@ struct rte_ioat_rawdev {
  */
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
-		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence)
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -69,7 +68,7 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc = &ioat->desc_ring[write];
 	desc->size = length;
 	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
+	desc->u.control_raw = (uint32_t)((!(write & 0xF)) << 3);
 	desc->src_addr = src;
 	desc->dest_addr = dst;
 
@@ -82,6 +81,23 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	return 1;
 }
 
+/* add fence to last written descriptor */
+static inline int
+rte_ioat_fence(int dev_id)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short write = ioat->next_write;
+	unsigned short mask = ioat->ring_size - 1;
+	struct rte_ioat_generic_hw_desc *desc;
+
+	write = (write - 1) & mask;
+	desc = &ioat->desc_ring[write];
+
+	desc->u.control.fence = 1;
+	return 0;
+}
+
 /*
  * Trigger hardware to begin performing enqueued operations
  */
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 67f75737b..e6d1d1236 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -361,15 +361,11 @@ ioat_enqueue_packets(struct rte_mbuf **pkts,
 	for (i = 0; i < nb_rx; i++) {
 		/* Perform data copy */
 		ret = rte_ioat_enqueue_copy(dev_id,
-			pkts[i]->buf_iova
-			- addr_offset,
-			pkts_copy[i]->buf_iova
-			- addr_offset,
-			rte_pktmbuf_data_len(pkts[i])
-			+ addr_offset,
+			pkts[i]->buf_iova - addr_offset,
+			pkts_copy[i]->buf_iova - addr_offset,
+			rte_pktmbuf_data_len(pkts[i]) + addr_offset,
 			(uintptr_t)pkts[i],
-			(uintptr_t)pkts_copy[i],
-			0 /* nofence */);
+			(uintptr_t)pkts_copy[i]);
 
 		if (ret != 1)
 			break;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 09/25] raw/ioat: make the HW register spec private
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (7 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 08/25] raw/ioat: add separate API for fence call Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
                     ` (15 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Only a few definitions from the hardware spec are actually used in the
driver runtime, so we can copy over those few and make the rest of the spec
a private header in the driver.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c                |  3 ++
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} | 26 -----------
 drivers/raw/ioat/meson.build                  |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 43 +++++++++++++++++--
 4 files changed, 44 insertions(+), 31 deletions(-)
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (91%)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index ea9f51ffc..aa59b731f 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -4,10 +4,12 @@
 
 #include <rte_cycles.h>
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 #include <rte_string_fns.h>
 #include <rte_rawdev_pmd.h>
 
 #include "rte_ioat_rawdev.h"
+#include "ioat_spec.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -268,6 +270,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
+	ioat->doorbell = &ioat->regs->dmacount;
 	ioat->ring_size = 0;
 	ioat->desc_ring = NULL;
 	ioat->status_addr = ioat->mz->iova +
diff --git a/drivers/raw/ioat/rte_ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
similarity index 91%
rename from drivers/raw/ioat/rte_ioat_spec.h
rename to drivers/raw/ioat/ioat_spec.h
index c6e7929b2..9645e16d4 100644
--- a/drivers/raw/ioat/rte_ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -86,32 +86,6 @@ struct rte_ioat_registers {
 
 #define RTE_IOAT_CHANCMP_ALIGN			8	/* CHANCMP address must be 64-bit aligned */
 
-struct rte_ioat_generic_hw_desc {
-	uint32_t size;
-	union {
-		uint32_t control_raw;
-		struct {
-			uint32_t int_enable: 1;
-			uint32_t src_snoop_disable: 1;
-			uint32_t dest_snoop_disable: 1;
-			uint32_t completion_update: 1;
-			uint32_t fence: 1;
-			uint32_t reserved2: 1;
-			uint32_t src_page_break: 1;
-			uint32_t dest_page_break: 1;
-			uint32_t bundle: 1;
-			uint32_t dest_dca: 1;
-			uint32_t hint: 1;
-			uint32_t reserved: 13;
-			uint32_t op: 8;
-		} control;
-	} u;
-	uint64_t src_addr;
-	uint64_t dest_addr;
-	uint64_t next;
-	uint64_t op_specific[4];
-};
-
 struct rte_ioat_dma_hw_desc {
 	uint32_t size;
 	union {
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index f66e9b605..06636f8a9 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,5 +8,4 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-		'rte_ioat_rawdev_fns.h',
-		'rte_ioat_spec.h')
+		'rte_ioat_rawdev_fns.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 466721a23..c6e0b9a58 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -8,7 +8,36 @@
 #include <rte_rawdev.h>
 #include <rte_memzone.h>
 #include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device descriptor
+ */
+struct rte_ioat_generic_hw_desc {
+	uint32_t size;
+	union {
+		uint32_t control_raw;
+		struct {
+			uint32_t int_enable: 1;
+			uint32_t src_snoop_disable: 1;
+			uint32_t dest_snoop_disable: 1;
+			uint32_t completion_update: 1;
+			uint32_t fence: 1;
+			uint32_t reserved2: 1;
+			uint32_t src_page_break: 1;
+			uint32_t dest_page_break: 1;
+			uint32_t bundle: 1;
+			uint32_t dest_dca: 1;
+			uint32_t hint: 1;
+			uint32_t reserved: 13;
+			uint32_t op: 8;
+		} control;
+	} u;
+	uint64_t src_addr;
+	uint64_t dest_addr;
+	uint64_t next;
+	uint64_t op_specific[4];
+};
 
 /**
  * @internal
@@ -19,7 +48,7 @@ struct rte_ioat_rawdev {
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile struct rte_ioat_registers *regs;
+	volatile uint16_t *doorbell;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -40,8 +69,16 @@ struct rte_ioat_rawdev {
 
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
+
+	/* pointer to the register bar */
+	volatile struct rte_ioat_registers *regs;
 };
 
+#define RTE_IOAT_CHANSTS_IDLE			0x1
+#define RTE_IOAT_CHANSTS_SUSPENDED		0x2
+#define RTE_IOAT_CHANSTS_HALTED			0x3
+#define RTE_IOAT_CHANSTS_ARMED			0x4
+
 /*
  * Enqueue a copy operation onto the ioat device
  */
@@ -109,7 +146,7 @@ rte_ioat_perform_ops(int dev_id)
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
 			.control.completion_update = 1;
 	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
+	*ioat->doorbell = ioat->next_write;
 	ioat->started = ioat->enqueued;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 10/25] usertools/dpdk-devbind.py: add support for DSA HW
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (8 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 09/25] raw/ioat: make the HW register spec private Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
                     ` (14 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Kevin Laatz, Bruce Richardson, Radu Nicolau

From: Kevin Laatz <kevin.laatz@intel.com>

Intel Data Streaming Accelerator (Intel DSA) is a high-performance data
copy and transformation accelerator which will be integrated in future
Intel processors [1].

Add DSA device support to dpdk-devbind.py script.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst | 2 ++
 usertools/dpdk-devbind.py              | 6 +++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 25ede96d9..e48e6ea75 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -120,6 +120,8 @@ New Features
 
   The ioat rawdev driver has been updated and enhanced. Changes include:
 
+  * Added support for Intel\ |reg| Data Streaming Accelerator hardware.
+    For more information, see https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
   * Added a per-device configuration flag to disable management of user-provided completion handles
   * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
     and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index d149bac13..1d1113a08 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -48,6 +48,8 @@
               'SVendor': None, 'SDevice': None}
 intel_ioat_icx = {'Class': '08', 'Vendor': '8086', 'Device': '0b00',
               'SVendor': None, 'SDevice': None}
+intel_idxd_spr = {'Class': '08', 'Vendor': '8086', 'Device': '0b25',
+              'SVendor': None, 'SDevice': None}
 intel_ntb_skx = {'Class': '06', 'Vendor': '8086', 'Device': '201c',
               'SVendor': None, 'SDevice': None}
 intel_ntb_icx = {'Class': '06', 'Vendor': '8086', 'Device': '347e',
@@ -59,7 +61,9 @@
 eventdev_devices = [cavium_sso, cavium_tim, octeontx2_sso]
 mempool_devices = [cavium_fpa, octeontx2_npa]
 compress_devices = [cavium_zip]
-misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_ntb_skx, intel_ntb_icx, octeontx2_dma]
+misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_idxd_spr,
+                intel_ntb_skx, intel_ntb_icx,
+                octeontx2_dma]
 
 # global dict ethernet devices present. Dictionary indexed by PCI address.
 # Each device within this is itself a dictionary of device properties
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (9 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
                     ` (13 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add in the basic probe/remove skeleton code for DSA devices which are bound
directly to a vfio or uio driver. The kernel module supporting these devices
uses the "idxd" name, so that name is used as the function and file prefix to
avoid conflicts with the existing "ioat"-prefixed functions.

Since we are adding new files to the driver and there will be common
definitions shared between the various files, we create a new internal
header file ioat_private.h to hold common macros and function prototypes.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst     | 69 ++++++++++-----------------------
 drivers/raw/ioat/idxd_pci.c     | 56 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 27 +++++++++++++
 drivers/raw/ioat/ioat_rawdev.c  |  9 +----
 drivers/raw/ioat/meson.build    |  6 ++-
 5 files changed, 108 insertions(+), 59 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/ioat_private.h

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 71bca0b28..b898f98d5 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -3,10 +3,12 @@
 
 .. include:: <isonum.txt>
 
-IOAT Rawdev Driver for Intel\ |reg| QuickData Technology
-======================================================================
+IOAT Rawdev Driver
+===================
 
 The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg|
+Data Streaming Accelerator `(Intel DSA)
+<https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator>`_ and for Intel\ |reg|
 QuickData Technology, part of Intel\ |reg| I/O Acceleration Technology
 `(Intel I/OAT)
 <https://www.intel.com/content/www/us/en/wireless-network/accel-technology.html>`_.
@@ -17,61 +19,30 @@ be done by software, freeing up CPU cycles for other tasks.
 Hardware Requirements
 ----------------------
 
-On Linux, the presence of an Intel\ |reg| QuickData Technology hardware can
-be detected by checking the output of the ``lspci`` command, where the
-hardware will be often listed as "Crystal Beach DMA" or "CBDMA". For
-example, on a system with Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @2.20GHz,
-lspci shows:
-
-.. code-block:: console
-
-  # lspci | grep DMA
-  00:04.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 0 (rev 01)
-  00:04.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 1 (rev 01)
-  00:04.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 2 (rev 01)
-  00:04.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 3 (rev 01)
-  00:04.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 4 (rev 01)
-  00:04.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 5 (rev 01)
-  00:04.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 6 (rev 01)
-  00:04.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 7 (rev 01)
-
-On a system with Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz, lspci
-shows:
-
-.. code-block:: console
-
-  # lspci | grep DMA
-  00:04.0 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.1 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.2 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.3 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.4 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.5 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.6 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.7 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-
+The ``dpdk-devbind.py`` script, included with DPDK,
+can be used to show the presence of supported hardware.
+Running ``dpdk-devbind.py --status-dev misc`` will show all the miscellaneous,
+or rawdev-based devices on the system.
+For Intel\ |reg| QuickData Technology devices, the hardware will be often listed as "Crystal Beach DMA",
+or "CBDMA".
+For Intel\ |reg| DSA devices, they are currently (at time of writing) appearing as devices with type "0b25",
+due to the absence of pci-id database entries for them at this point.
 
 Compilation
 ------------
 
-For builds done with ``make``, the driver compilation is enabled by the
-``CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV`` build configuration option. This is
-enabled by default in builds for x86 platforms, and disabled in other
-configurations.
-
-For builds using ``meson`` and ``ninja``, the driver will be built when the
-target platform is x86-based.
+For builds using ``meson`` and ``ninja``, the driver will be built when the target platform is x86-based.
+No additional compilation steps are necessary.
 
 Device Setup
 -------------
 
-The Intel\ |reg| QuickData Technology HW devices will need to be bound to a
-user-space IO driver for use. The script ``dpdk-devbind.py`` script
-included with DPDK can be used to view the state of the devices and to bind
-them to a suitable DPDK-supported kernel driver. When querying the status
-of the devices, they will appear under the category of "Misc (rawdev)
-devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
-used to see the state of those devices alone.
+The HW devices to be used will need to be bound to a user-space IO driver for use.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported kernel driver, such as ``vfio-pci``.
+For example::
+
+	$ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1
 
 Device Probing and Initialization
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
new file mode 100644
index 000000000..1a30e9c31
--- /dev/null
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_pci.h>
+
+#include "ioat_private.h"
+
+#define IDXD_VENDOR_ID		0x8086
+#define IDXD_DEVICE_ID_SPR	0x0B25
+
+#define IDXD_PMD_RAWDEV_NAME_PCI rawdev_idxd_pci
+
+const struct rte_pci_id pci_id_idxd_map[] = {
+	{ RTE_PCI_DEVICE(IDXD_VENDOR_ID, IDXD_DEVICE_ID_SPR) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+{
+	int ret = 0;
+	char name[PCI_PRI_STR_SIZE];
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
+	dev->device.driver = &drv->driver;
+
+	return ret;
+}
+
+static int
+idxd_rawdev_remove_pci(struct rte_pci_device *dev)
+{
+	char name[PCI_PRI_STR_SIZE];
+	int ret = 0;
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+
+	IOAT_PMD_INFO("Closing %s on NUMA node %d",
+			name, dev->device.numa_node);
+
+	return ret;
+}
+
+struct rte_pci_driver idxd_pmd_drv_pci = {
+	.id_table = pci_id_idxd_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = idxd_rawdev_probe_pci,
+	.remove = idxd_rawdev_remove_pci,
+};
+
+RTE_PMD_REGISTER_PCI(IDXD_PMD_RAWDEV_NAME_PCI, idxd_pmd_drv_pci);
+RTE_PMD_REGISTER_PCI_TABLE(IDXD_PMD_RAWDEV_NAME_PCI, pci_id_idxd_map);
+RTE_PMD_REGISTER_KMOD_DEP(IDXD_PMD_RAWDEV_NAME_PCI,
+			  "* igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
new file mode 100644
index 000000000..d87d4d055
--- /dev/null
+++ b/drivers/raw/ioat/ioat_private.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _IOAT_PRIVATE_H_
+#define _IOAT_PRIVATE_H_
+
+/**
+ * @file idxd_private.h
+ *
+ * Private data structures for the idxd/DSA part of ioat device driver
+ *
+ * @warning
+ * @b EXPERIMENTAL: these structures and APIs may change without prior notice
+ */
+
+extern int ioat_pmd_logtype;
+
+#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
+		ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
+
+#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
+#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
+#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
+#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
+
+#endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index aa59b731f..1fe32278d 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -10,6 +10,7 @@
 
 #include "rte_ioat_rawdev.h"
 #include "ioat_spec.h"
+#include "ioat_private.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -29,14 +30,6 @@ static struct rte_pci_driver ioat_pmd_drv;
 
 RTE_LOG_REGISTER(ioat_pmd_logtype, rawdev.ioat, INFO);
 
-#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
-	ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
-
-#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
-#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
-#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
-#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
-
 #define DESC_SZ sizeof(struct rte_ioat_generic_hw_desc)
 #define COMPLETION_SZ sizeof(__m128i)
 
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 06636f8a9..3529635e9 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -3,8 +3,10 @@
 
 build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
-sources = files('ioat_rawdev.c',
-		'ioat_rawdev_test.c')
+sources = files(
+	'idxd_pci.c',
+	'ioat_rawdev.c',
+	'ioat_rawdev_test.c')
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 12/25] raw/ioat: add vdev probe for DSA/idxd devices
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (10 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 13/25] raw/ioat: include example configuration script Bruce Richardson
                     ` (12 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Kevin Laatz, Bruce Richardson, Radu Nicolau

From: Kevin Laatz <kevin.laatz@intel.com>

The Intel DSA devices can be exposed to userspace via the kernel driver, so they
can be used without having to bind them to vfio/uio. Therefore we add support
for using those kernel-configured devices as vdevs, taking as a parameter the
individual HW work queue to be used by the vdev.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst  |  68 +++++++++++++++++--
 drivers/raw/ioat/idxd_vdev.c | 123 +++++++++++++++++++++++++++++++++++
 drivers/raw/ioat/meson.build |   6 +-
 3 files changed, 192 insertions(+), 5 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_vdev.c

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index b898f98d5..5b8d27980 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -37,9 +37,62 @@ No additional compilation steps are necessary.
 Device Setup
 -------------
 
+Depending on support provided by the PMD, HW devices can either use the kernel configured driver
+or be bound to a user-space IO driver for use.
+For example, Intel\ |reg| DSA devices can use the IDXD kernel driver or DPDK-supported drivers,
+such as ``vfio-pci``.
+
+Intel\ |reg| DSA devices using idxd kernel driver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use an Intel\ |reg| DSA device bound to the IDXD kernel driver, the device must first be configured.
+The `accel-config <https://github.com/intel/idxd-config>`_ utility library can be used for configuration.
+
+.. note::
+        The device configuration can also be done by directly interacting with the sysfs nodes.
+
+There are some mandatory configuration steps before being able to use a device with an application.
+The internal engines, which do the copies or other operations,
+and the work-queues, which are used by applications to assign work to the device,
+need to be assigned to groups, and the various other configuration options,
+such as priority or queue depth, need to be set for each queue.
+
+To assign an engine to a group::
+
+        $ accel-config config-engine dsa0/engine0.0 --group-id=0
+        $ accel-config config-engine dsa0/engine0.1 --group-id=1
+
+To assign work queues to groups for passing descriptors to the engines a similar accel-config command can be used.
+However, the work queues also need to be configured depending on the use-case.
+Some configuration options include:
+
+* mode (Dedicated/Shared): Indicates whether a WQ may accept jobs from multiple queues simultaneously.
+* priority: WQ priority between 1 and 15. Larger value means higher priority.
+* wq-size: the size of the WQ. Sum of all WQ sizes must be less than the total-size defined by the device.
+* type: WQ type (kernel/mdev/user). Determines how the device is presented.
+* name: identifier given to the WQ.
+
+Example configuration for a work queue::
+
+        $ accel-config config-wq dsa0/wq0.0 --group-id=0 \
+           --mode=dedicated --priority=10 --wq-size=8 \
+           --type=user --name=app1
+
+Once the devices have been configured, they need to be enabled::
+
+        $ accel-config enable-device dsa0
+        $ accel-config enable-wq dsa0/wq0.0
+
+Check the device configuration::
+
+        $ accel-config list
+
+Devices using VFIO/UIO drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
 The HW devices to be used will need to be bound to a user-space IO driver for use.
 The ``dpdk-devbind.py`` script can be used to view the state of the devices
-and to bind them to a suitable DPDK-supported kernel driver, such as ``vfio-pci``.
+and to bind them to a suitable DPDK-supported driver, such as ``vfio-pci``.
 For example::
 
 	$ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1
@@ -47,9 +100,16 @@ For example::
 Device Probing and Initialization
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Once bound to a suitable kernel device driver, the HW devices will be found
-as part of the PCI scan done at application initialization time. No vdev
-parameters need to be passed to create or initialize the device.
+For devices bound to a suitable DPDK-supported VFIO/UIO driver, the HW devices will
+be found as part of the device scan done at application initialization time without
+the need to pass parameters to the application.
+
+If the device is bound to the IDXD kernel driver (and previously configured with sysfs),
+then a specific work queue needs to be passed to the application via a vdev parameter.
+This vdev parameter takes the driver name and work queue name as parameters.
+For example, to use work queue 0 on Intel\ |reg| DSA instance 0::
+
+        $ dpdk-test --no-pci --vdev=rawdev_idxd,wq=0.0
 
 Once probed successfully, the device will appear as a ``rawdev``, that is a
 "raw device type" inside DPDK, and can be accessed using APIs from the
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
new file mode 100644
index 000000000..0509fc084
--- /dev/null
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_vdev.h>
+#include <rte_kvargs.h>
+#include <rte_string_fns.h>
+#include <rte_rawdev_pmd.h>
+
+#include "ioat_private.h"
+
+/** Name of the device driver */
+#define IDXD_PMD_RAWDEV_NAME rawdev_idxd
+/* takes a work queue(WQ) as parameter */
+#define IDXD_ARG_WQ		"wq"
+
+static const char * const valid_args[] = {
+	IDXD_ARG_WQ,
+	NULL
+};
+
+struct idxd_vdev_args {
+	uint8_t device_id;
+	uint8_t wq_id;
+};
+
+static int
+idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
+			  void *extra_args)
+{
+	struct idxd_vdev_args *args = (struct idxd_vdev_args *)extra_args;
+	int dev, wq, bytes = -1;
+	int read = sscanf(value, "%d.%d%n", &dev, &wq, &bytes);
+
+	if (read != 2 || bytes != (int)strlen(value)) {
+		IOAT_PMD_ERR("Error parsing work-queue id. Must be in <dev_id>.<queue_id> format");
+		return -EINVAL;
+	}
+
+	if (dev >= UINT8_MAX || wq >= UINT8_MAX) {
+		IOAT_PMD_ERR("Device or work queue id out of range");
+		return -EINVAL;
+	}
+
+	args->device_id = dev;
+	args->wq_id = wq;
+
+	return 0;
+}
+
+static int
+idxd_vdev_parse_params(struct rte_kvargs *kvlist, struct idxd_vdev_args *args)
+{
+	if (rte_kvargs_count(kvlist, IDXD_ARG_WQ) == 1) {
+		if (rte_kvargs_process(kvlist, IDXD_ARG_WQ,
+				&idxd_rawdev_parse_wq, args) < 0) {
+			IOAT_PMD_ERR("Error parsing %s", IDXD_ARG_WQ);
+			goto free;
+		}
+	} else {
+		IOAT_PMD_ERR("%s is a mandatory arg", IDXD_ARG_WQ);
+		return -EINVAL;
+	}
+
+	return 0;
+
+free:
+	if (kvlist)
+		rte_kvargs_free(kvlist);
+	return -EINVAL;
+}
+
+static int
+idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
+{
+	struct rte_kvargs *kvlist;
+	struct idxd_vdev_args vdev_args;
+	const char *name;
+	int ret = 0;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Initializing pmd_idxd for %s", name);
+
+	kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev), valid_args);
+	if (kvlist == NULL) {
+		IOAT_PMD_ERR("Invalid kvargs key");
+		return -EINVAL;
+	}
+
+	ret = idxd_vdev_parse_params(kvlist, &vdev_args);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to parse kvargs");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
+{
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Remove DSA vdev %p", name);
+
+	return 0;
+}
+
+struct rte_vdev_driver idxd_rawdev_drv_vdev = {
+	.probe = idxd_rawdev_probe_vdev,
+	.remove = idxd_rawdev_remove_vdev,
+};
+
+RTE_PMD_REGISTER_VDEV(IDXD_PMD_RAWDEV_NAME, idxd_rawdev_drv_vdev);
+RTE_PMD_REGISTER_PARAM_STRING(IDXD_PMD_RAWDEV_NAME,
+			      "wq=<string>");
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 3529635e9..b343b7367 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -5,9 +5,13 @@ build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
+	'idxd_vdev.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
-deps += ['rawdev', 'bus_pci', 'mbuf']
+deps += ['bus_pci',
+	'bus_vdev',
+	'mbuf',
+	'rawdev']
 
 install_headers('rte_ioat_rawdev.h',
 		'rte_ioat_rawdev_fns.h')
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 13/25] raw/ioat: include example configuration script
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (11 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
                     ` (11 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Devices managed by the idxd kernel driver must be configured for DPDK use
before they can be used by the ioat driver. This example script serves both
as a quick way to get the driver set up with a simple configuration, and as
the basis for users to modify it and create their own configuration
scripts.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst       |  2 +
 drivers/raw/ioat/dpdk_idxd_cfg.py | 79 +++++++++++++++++++++++++++++++
 2 files changed, 81 insertions(+)
 create mode 100755 drivers/raw/ioat/dpdk_idxd_cfg.py

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 5b8d27980..7c2a2d457 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -50,6 +50,8 @@ The `accel-config <https://github.com/intel/idxd-config>`_ utility library can b
 
 .. note::
         The device configuration can also be done by directly interacting with the sysfs nodes.
+        An example of how this may be done can be seen in the script ``dpdk_idxd_cfg.py``
+        included in the driver source directory.
 
 There are some mandatory configuration steps before being able to use a device with an application.
 The internal engines, which do the copies or other operations,
diff --git a/drivers/raw/ioat/dpdk_idxd_cfg.py b/drivers/raw/ioat/dpdk_idxd_cfg.py
new file mode 100755
index 000000000..bce4bb5bd
--- /dev/null
+++ b/drivers/raw/ioat/dpdk_idxd_cfg.py
@@ -0,0 +1,79 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+"""
+Configure an entire Intel DSA instance, using idxd kernel driver, for DPDK use
+"""
+
+import sys
+import argparse
+import os
+import os.path
+
+
+class SysfsDir:
+    "Used to read/write paths in a sysfs directory"
+    def __init__(self, path):
+        self.path = path
+
+    def read_int(self, filename):
+        "Return a value from sysfs file"
+        with open(os.path.join(self.path, filename)) as f:
+            return int(f.readline())
+
+    def write_values(self, values):
+        "write dictionary, where key is filename and value is value to write"
+        for filename, contents in values.items():
+            with open(os.path.join(self.path, filename), "w") as f:
+                f.write(str(contents))
+
+
+def configure_dsa(dsa_id, queues):
+    "Configure the DSA instance with appropriate number of queues"
+    dsa_dir = SysfsDir(f"/sys/bus/dsa/devices/dsa{dsa_id}")
+    drv_dir = SysfsDir("/sys/bus/dsa/drivers/dsa")
+
+    max_groups = dsa_dir.read_int("max_groups")
+    max_engines = dsa_dir.read_int("max_engines")
+    max_queues = dsa_dir.read_int("max_work_queues")
+    max_tokens = dsa_dir.read_int("max_tokens")
+
+    # we want one engine per group
+    nb_groups = min(max_engines, max_groups)
+    for grp in range(nb_groups):
+        dsa_dir.write_values({f"engine{dsa_id}.{grp}/group_id": grp})
+
+    nb_queues = min(queues, max_queues)
+    if queues > nb_queues:
+        print(f"Setting number of queues to max supported value: {max_queues}")
+
+    # configure each queue
+    for q in range(nb_queues):
+        wq_dir = SysfsDir(os.path.join(dsa_dir.path, f"wq{dsa_id}.{q}"))
+        wq_dir.write_values({"group_id": q % nb_groups,
+                             "type": "user",
+                             "mode": "dedicated",
+                             "name": f"dpdk_wq{dsa_id}.{q}",
+                             "priority": 1,
+                             "size": int(max_tokens / nb_queues)})
+
+    # enable device and then queues
+    drv_dir.write_values({"bind": f"dsa{dsa_id}"})
+    for q in range(nb_queues):
+        drv_dir.write_values({"bind": f"wq{dsa_id}.{q}"})
+
+
+def main(args):
+    "Main function, does arg parsing and calls config function"
+    arg_p = argparse.ArgumentParser(
+        description="Configure whole DSA device instance for DPDK use")
+    arg_p.add_argument('dsa_id', type=int, help="DSA instance number")
+    arg_p.add_argument('-q', metavar='queues', type=int, default=255,
+                       help="Number of queues to set up")
+    parsed_args = arg_p.parse_args(args[1:])
+    configure_dsa(parsed_args.dsa_id, parsed_args.q)
+
+
+if __name__ == "__main__":
+    main(sys.argv)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 14/25] raw/ioat: create rawdev instances on idxd PCI probe
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (12 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 13/25] raw/ioat: include example configuration script Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
                     ` (10 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

When a matching device is found via PCI probe, create a rawdev instance for
each queue on the hardware. Use an empty self-test function for these devices
so that the overall rawdev_autotest does not report failures.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            | 237 ++++++++++++++++++++++++-
 drivers/raw/ioat/ioat_common.c         |  68 +++++++
 drivers/raw/ioat/ioat_private.h        |  33 ++++
 drivers/raw/ioat/ioat_rawdev_test.c    |   7 +
 drivers/raw/ioat/ioat_spec.h           |  64 +++++++
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  35 +++-
 7 files changed, 442 insertions(+), 3 deletions(-)
 create mode 100644 drivers/raw/ioat/ioat_common.c

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 1a30e9c31..c3fec56d5 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -3,8 +3,10 @@
  */
 
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 
 #include "ioat_private.h"
+#include "ioat_spec.h"
 
 #define IDXD_VENDOR_ID		0x8086
 #define IDXD_DEVICE_ID_SPR	0x0B25
@@ -16,17 +18,246 @@ const struct rte_pci_id pci_id_idxd_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+static inline int
+idxd_pci_dev_command(struct idxd_rawdev *idxd, enum rte_idxd_cmds command)
+{
+	uint8_t err_code;
+	uint16_t qid = idxd->qid;
+	int i = 0;
+
+	if (command >= idxd_disable_wq && command <= idxd_reset_wq)
+		qid = (1 << qid);
+	rte_spinlock_lock(&idxd->u.pci->lk);
+	idxd->u.pci->regs->cmd = (command << IDXD_CMD_SHIFT) | qid;
+
+	do {
+		rte_pause();
+		err_code = idxd->u.pci->regs->cmdstatus;
+		if (++i >= 1000) {
+			IOAT_PMD_ERR("Timeout waiting for command response from HW");
+			rte_spinlock_unlock(&idxd->u.pci->lk);
+			return err_code;
+		}
+	} while (idxd->u.pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK);
+	rte_spinlock_unlock(&idxd->u.pci->lk);
+
+	return err_code & CMDSTATUS_ERR_MASK;
+}
+
+static int
+idxd_is_wq_enabled(struct idxd_rawdev *idxd)
+{
+	uint32_t state = idxd->u.pci->wq_regs[idxd->qid].wqcfg[WQ_STATE_IDX];
+	return ((state >> WQ_STATE_SHIFT) & WQ_STATE_MASK) == 0x1;
+}
+
+static const struct rte_rawdev_ops idxd_pci_ops = {
+		.dev_close = idxd_rawdev_close,
+		.dev_selftest = idxd_rawdev_test,
+};
+
+/* each portal uses 4 x 4k pages */
+#define IDXD_PORTAL_SIZE (4096 * 4)
+
+static int
+init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd)
+{
+	struct idxd_pci_common *pci;
+	uint8_t nb_groups, nb_engines, nb_wqs;
+	uint16_t grp_offset, wq_offset; /* how far into bar0 the regs are */
+	uint16_t wq_size, total_wq_size;
+	uint8_t lg2_max_batch, lg2_max_copy_size;
+	unsigned int i, err_code;
+
+	pci = malloc(sizeof(*pci));
+	if (pci == NULL) {
+		IOAT_PMD_ERR("%s: Can't allocate memory", __func__);
+		goto err;
+	}
+	rte_spinlock_init(&pci->lk);
+
+	/* assign the bar registers, and then configure device */
+	pci->regs = dev->mem_resource[0].addr;
+	grp_offset = (uint16_t)pci->regs->offsets[0];
+	pci->grp_regs = RTE_PTR_ADD(pci->regs, grp_offset * 0x100);
+	wq_offset = (uint16_t)(pci->regs->offsets[0] >> 16);
+	pci->wq_regs = RTE_PTR_ADD(pci->regs, wq_offset * 0x100);
+	pci->portals = dev->mem_resource[2].addr;
+
+	/* sanity check device status */
+	if (pci->regs->gensts & GENSTS_DEV_STATE_MASK) {
+		/* need function-level-reset (FLR) or is enabled */
+		IOAT_PMD_ERR("Device status is not disabled, cannot init");
+		goto err;
+	}
+	if (pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK) {
+		/* command in progress */
+		IOAT_PMD_ERR("Device has a command in progress, cannot init");
+		goto err;
+	}
+
+	/* read basic info about the hardware for use when configuring */
+	nb_groups = (uint8_t)pci->regs->grpcap;
+	nb_engines = (uint8_t)pci->regs->engcap;
+	nb_wqs = (uint8_t)(pci->regs->wqcap >> 16);
+	total_wq_size = (uint16_t)pci->regs->wqcap;
+	lg2_max_copy_size = (uint8_t)(pci->regs->gencap >> 16) & 0x1F;
+	lg2_max_batch = (uint8_t)(pci->regs->gencap >> 21) & 0x0F;
+
+	IOAT_PMD_DEBUG("nb_groups = %u, nb_engines = %u, nb_wqs = %u",
+			nb_groups, nb_engines, nb_wqs);
+
+	/* zero out any old config */
+	for (i = 0; i < nb_groups; i++) {
+		pci->grp_regs[i].grpengcfg = 0;
+		pci->grp_regs[i].grpwqcfg[0] = 0;
+	}
+	for (i = 0; i < nb_wqs; i++)
+		pci->wq_regs[i].wqcfg[0] = 0;
+
+	/* put each engine into a separate group to avoid reordering */
+	if (nb_groups > nb_engines)
+		nb_groups = nb_engines;
+	if (nb_groups < nb_engines)
+		nb_engines = nb_groups;
+
+	/* assign engines to groups, round-robin style */
+	for (i = 0; i < nb_engines; i++) {
+		IOAT_PMD_DEBUG("Assigning engine %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpengcfg |= (1ULL << i);
+	}
+
+	/* now do the same for queues and give work slots to each queue */
+	wq_size = total_wq_size / nb_wqs;
+	IOAT_PMD_DEBUG("Work queue size = %u, max batch = 2^%u, max copy = 2^%u",
+			wq_size, lg2_max_batch, lg2_max_copy_size);
+	for (i = 0; i < nb_wqs; i++) {
+		/* add engine "i" to a group */
+		IOAT_PMD_DEBUG("Assigning work queue %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpwqcfg[0] |= (1ULL << i);
+		/* now configure it, in terms of size, max batch, mode */
+		pci->wq_regs[i].wqcfg[WQ_SIZE_IDX] = wq_size;
+		pci->wq_regs[i].wqcfg[WQ_MODE_IDX] = (1 << WQ_PRIORITY_SHIFT) |
+				WQ_MODE_DEDICATED;
+		pci->wq_regs[i].wqcfg[WQ_SIZES_IDX] = lg2_max_copy_size |
+				(lg2_max_batch << WQ_BATCH_SZ_SHIFT);
+	}
+
+	/* dump the group configuration to output */
+	for (i = 0; i < nb_groups; i++) {
+		IOAT_PMD_DEBUG("## Group %d", i);
+		IOAT_PMD_DEBUG("    GRPWQCFG: %"PRIx64, pci->grp_regs[i].grpwqcfg[0]);
+		IOAT_PMD_DEBUG("    GRPENGCFG: %"PRIx64, pci->grp_regs[i].grpengcfg);
+		IOAT_PMD_DEBUG("    GRPFLAGS: %"PRIx32, pci->grp_regs[i].grpflags);
+	}
+
+	idxd->u.pci = pci;
+	idxd->max_batches = wq_size;
+
+	/* enable the device itself */
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error enabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device enabled OK");
+
+	return nb_wqs;
+
+err:
+	free(pci);
+	return -1;
+}
+
 static int
 idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 {
-	int ret = 0;
+	struct idxd_rawdev idxd = {{0}}; /* Double {} to avoid error on BSD12 */
+	uint8_t nb_wqs;
+	int qid, ret = 0;
 	char name[PCI_PRI_STR_SIZE];
 
 	rte_pci_device_name(&dev->addr, name, sizeof(name));
 	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
 	dev->device.driver = &drv->driver;
 
-	return ret;
+	ret = init_pci_device(dev, &idxd);
+	if (ret < 0) {
+		IOAT_PMD_ERR("Error initializing PCI hardware");
+		return ret;
+	}
+	nb_wqs = (uint8_t)ret;
+
+	/* set up one device for each queue */
+	for (qid = 0; qid < nb_wqs; qid++) {
+		char qname[32];
+
+		/* add the queue number to each device name */
+		snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
+		idxd.qid = qid;
+		idxd.public.portal = RTE_PTR_ADD(idxd.u.pci->portals,
+				qid * IDXD_PORTAL_SIZE);
+		if (idxd_is_wq_enabled(&idxd))
+			IOAT_PMD_ERR("Error, WQ %u seems enabled", qid);
+		ret = idxd_rawdev_create(qname, &dev->device,
+				&idxd, &idxd_pci_ops);
+		if (ret != 0) {
+			IOAT_PMD_ERR("Failed to create rawdev %s", name);
+			if (qid == 0) /* if no devices using this, free pci */
+				free(idxd.u.pci);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+idxd_rawdev_destroy(const char *name)
+{
+	int ret;
+	uint8_t err_code;
+	struct rte_rawdev *rdev;
+	struct idxd_rawdev *idxd;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid device name");
+		return -EINVAL;
+	}
+
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* disable the device */
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error disabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device disabled OK");
+
+	/* free device memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+		rte_memzone_free(idxd->mz);
+	}
+
+	/* rte_rawdev_close is called by pmd_release */
+	ret = rte_rawdev_pmd_release(rdev);
+	if (ret)
+		IOAT_PMD_DEBUG("Device cleanup failed");
+
+	return 0;
 }
 
 static int
@@ -40,6 +271,8 @@ idxd_rawdev_remove_pci(struct rte_pci_device *dev)
 	IOAT_PMD_INFO("Closing %s on NUMA node %d",
 			name, dev->device.numa_node);
 
+	ret = idxd_rawdev_destroy(name);
+
 	return ret;
 }
 
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
new file mode 100644
index 000000000..c3aa015ed
--- /dev/null
+++ b/drivers/raw/ioat/ioat_common.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_rawdev_pmd.h>
+#include <rte_memzone.h>
+#include <rte_common.h>
+
+#include "ioat_private.h"
+
+int
+idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
+{
+	return 0;
+}
+
+int
+idxd_rawdev_create(const char *name, struct rte_device *dev,
+		   const struct idxd_rawdev *base_idxd,
+		   const struct rte_rawdev_ops *ops)
+{
+	struct idxd_rawdev *idxd;
+	struct rte_rawdev *rawdev = NULL;
+	const struct rte_memzone *mz = NULL;
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	int ret = 0;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid name of the device!");
+		ret = -EINVAL;
+		goto cleanup;
+	}
+
+	/* Allocate device structure */
+	rawdev = rte_rawdev_pmd_allocate(name, sizeof(struct idxd_rawdev),
+					 dev->numa_node);
+	if (rawdev == NULL) {
+		IOAT_PMD_ERR("Unable to allocate raw device");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+
+	snprintf(mz_name, sizeof(mz_name), "rawdev%u_private", rawdev->dev_id);
+	mz = rte_memzone_reserve(mz_name, sizeof(struct idxd_rawdev),
+			dev->numa_node, RTE_MEMZONE_IOVA_CONTIG);
+	if (mz == NULL) {
+		IOAT_PMD_ERR("Unable to reserve memzone for private data\n");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+	rawdev->dev_private = mz->addr;
+	rawdev->dev_ops = ops;
+	rawdev->device = dev;
+	rawdev->driver_name = IOAT_PMD_RAWDEV_NAME_STR;
+
+	idxd = rawdev->dev_private;
+	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->rawdev = rawdev;
+	idxd->mz = mz;
+
+	return 0;
+
+cleanup:
+	if (rawdev)
+		rte_rawdev_pmd_release(rawdev);
+
+	return ret;
+}
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index d87d4d055..53f00a9f3 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -14,6 +14,10 @@
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
+#include <rte_spinlock.h>
+#include <rte_rawdev_pmd.h>
+#include "rte_ioat_rawdev.h"
+
 extern int ioat_pmd_logtype;
 
 #define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
@@ -24,4 +28,33 @@ extern int ioat_pmd_logtype;
 #define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
 #define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
 
+struct idxd_pci_common {
+	rte_spinlock_t lk;
+	volatile struct rte_idxd_bar0 *regs;
+	volatile struct rte_idxd_wqcfg *wq_regs;
+	volatile struct rte_idxd_grpcfg *grp_regs;
+	volatile void *portals;
+};
+
+struct idxd_rawdev {
+	struct rte_idxd_rawdev public; /* the public members, must be first */
+
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	uint8_t qid;
+	uint16_t max_batches;
+
+	union {
+		struct idxd_pci_common *pci;
+	} u;
+};
+
+extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
+		       const struct idxd_rawdev *idxd,
+		       const struct rte_rawdev_ops *ops);
+
+extern int idxd_rawdev_close(struct rte_rawdev *dev);
+
+extern int idxd_rawdev_test(uint16_t dev_id);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 8b665cc9a..87a65b7ae 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -7,6 +7,7 @@
 #include <rte_mbuf.h>
 #include "rte_rawdev.h"
 #include "rte_ioat_rawdev.h"
+#include "ioat_private.h"
 
 #define MAX_SUPPORTED_RAWDEVS 64
 #define TEST_SKIPPED 77
@@ -267,3 +268,9 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
+
+int
+idxd_rawdev_test(uint16_t dev_id __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/raw/ioat/ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
index 9645e16d4..1aa768b9a 100644
--- a/drivers/raw/ioat/ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -268,6 +268,70 @@ union rte_ioat_hw_desc {
 	struct rte_ioat_pq_update_hw_desc pq_update;
 };
 
+/*** Definitions for Intel(R) Data Streaming Accelerator Follow ***/
+
+#define IDXD_CMD_SHIFT 20
+enum rte_idxd_cmds {
+	idxd_enable_dev = 1,
+	idxd_disable_dev,
+	idxd_drain_all,
+	idxd_abort_all,
+	idxd_reset_device,
+	idxd_enable_wq,
+	idxd_disable_wq,
+	idxd_drain_wq,
+	idxd_abort_wq,
+	idxd_reset_wq,
+};
+
+/* General bar0 registers */
+struct rte_idxd_bar0 {
+	uint32_t __rte_cache_aligned version;    /* offset 0x00 */
+	uint64_t __rte_aligned(0x10) gencap;     /* offset 0x10 */
+	uint64_t __rte_aligned(0x10) wqcap;      /* offset 0x20 */
+	uint64_t __rte_aligned(0x10) grpcap;     /* offset 0x30 */
+	uint64_t __rte_aligned(0x08) engcap;     /* offset 0x38 */
+	uint64_t __rte_aligned(0x10) opcap;      /* offset 0x40 */
+	uint64_t __rte_aligned(0x20) offsets[2]; /* offset 0x60 */
+	uint32_t __rte_aligned(0x20) gencfg;     /* offset 0x80 */
+	uint32_t __rte_aligned(0x08) genctrl;    /* offset 0x88 */
+	uint32_t __rte_aligned(0x10) gensts;     /* offset 0x90 */
+	uint32_t __rte_aligned(0x08) intcause;   /* offset 0x98 */
+	uint32_t __rte_aligned(0x10) cmd;        /* offset 0xA0 */
+	uint32_t __rte_aligned(0x08) cmdstatus;  /* offset 0xA8 */
+	uint64_t __rte_aligned(0x20) swerror[4]; /* offset 0xC0 */
+};
+
+struct rte_idxd_wqcfg {
+	uint32_t wqcfg[8] __rte_aligned(32); /* 32-byte register */
+};
+
+#define WQ_SIZE_IDX      0 /* size is in first 32-bit value */
+#define WQ_THRESHOLD_IDX 1 /* WQ threshold second 32-bits */
+#define WQ_MODE_IDX      2 /* WQ mode and other flags */
+#define WQ_SIZES_IDX     3 /* WQ transfer and batch sizes */
+#define WQ_OCC_INT_IDX   4 /* WQ occupancy interrupt handle */
+#define WQ_OCC_LIMIT_IDX 5 /* WQ occupancy limit */
+#define WQ_STATE_IDX     6 /* WQ state and occupancy state */
+
+#define WQ_MODE_SHARED    0
+#define WQ_MODE_DEDICATED 1
+#define WQ_PRIORITY_SHIFT 4
+#define WQ_BATCH_SZ_SHIFT 5
+#define WQ_STATE_SHIFT 30
+#define WQ_STATE_MASK 0x3
+
+struct rte_idxd_grpcfg {
+	uint64_t grpwqcfg[4]  __rte_cache_aligned; /* 64-byte register set */
+	uint64_t grpengcfg;  /* offset 32 */
+	uint32_t grpflags;   /* offset 40 */
+};
+
+#define GENSTS_DEV_STATE_MASK 0x03
+#define CMDSTATUS_ACTIVE_SHIFT 31
+#define CMDSTATUS_ACTIVE_MASK (1 << 31)
+#define CMDSTATUS_ERR_MASK 0xFF
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index b343b7367..5eff76a1a 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -6,6 +6,7 @@ reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
 	'idxd_vdev.c',
+	'ioat_common.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
 deps += ['bus_pci',
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index c6e0b9a58..fa2eb5334 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -41,9 +41,20 @@ struct rte_ioat_generic_hw_desc {
 
 /**
  * @internal
- * Structure representing a device instance
+ * Identify the data path to use.
+ * Must be first field of rte_ioat_rawdev and rte_idxd_rawdev structs
+ */
+enum rte_ioat_dev_type {
+	RTE_IOAT_DEV,
+	RTE_IDXD_DEV,
+};
+
+/**
+ * @internal
+ * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	enum rte_ioat_dev_type type;
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
@@ -79,6 +90,28 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED			0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/**
+ * @internal
+ * Structure representing an IDXD device instance
+ */
+struct rte_idxd_rawdev {
+	enum rte_ioat_dev_type type;
+	void *portal; /* address to write the batch descriptor */
+
+	/* counters to track the batches and the individual op handles */
+	uint16_t batch_ring_sz;  /* size of batch ring */
+	uint16_t hdl_ring_sz;    /* size of the user hdl ring */
+
+	uint16_t next_batch;     /* where we write descriptor ops */
+	uint16_t next_completed; /* batch where we read completions */
+	uint16_t next_ret_hdl;   /* the next user hdl to return */
+	uint16_t last_completed_hdl; /* the last user hdl that has completed */
+	uint16_t next_free_hdl;  /* where the handle for next op will go */
+
+	struct rte_idxd_user_hdl *hdl_ring;
+	struct rte_idxd_desc_batch *batch_ring;
+};
+
 /*
  * Enqueue a copy operation onto the ioat device
  */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 15/25] raw/ioat: create rawdev instances for idxd vdevs
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (13 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
                     ` (9 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Kevin Laatz, Bruce Richardson, Radu Nicolau

From: Kevin Laatz <kevin.laatz@intel.com>

For each vdev (DSA work queue) instance, create a rawdev instance.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_vdev.c    | 106 +++++++++++++++++++++++++++++++-
 drivers/raw/ioat/ioat_private.h |   4 ++
 2 files changed, 109 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 0509fc084..e61c26c1b 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -2,6 +2,12 @@
  * Copyright(c) 2020 Intel Corporation
  */
 
+#include <fcntl.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sys/mman.h>
+
+#include <rte_memzone.h>
 #include <rte_bus_vdev.h>
 #include <rte_kvargs.h>
 #include <rte_string_fns.h>
@@ -24,6 +30,36 @@ struct idxd_vdev_args {
 	uint8_t wq_id;
 };
 
+static const struct rte_rawdev_ops idxd_vdev_ops = {
+		.dev_close = idxd_rawdev_close,
+		.dev_selftest = idxd_rawdev_test,
+};
+
+static void *
+idxd_vdev_mmap_wq(struct idxd_vdev_args *args)
+{
+	void *addr;
+	char path[PATH_MAX];
+	int fd;
+
+	snprintf(path, sizeof(path), "/dev/dsa/wq%u.%u",
+			args->device_id, args->wq_id);
+	fd = open(path, O_RDWR);
+	if (fd < 0) {
+		IOAT_PMD_ERR("Failed to open device path");
+		return NULL;
+	}
+
+	addr = mmap(NULL, 0x1000, PROT_WRITE, MAP_SHARED, fd, 0);
+	close(fd);
+	if (addr == MAP_FAILED) {
+		IOAT_PMD_ERR("Failed to mmap device");
+		return NULL;
+	}
+
+	return addr;
+}
+
 static int
 idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
 			  void *extra_args)
@@ -70,10 +106,32 @@ idxd_vdev_parse_params(struct rte_kvargs *kvlist, struct idxd_vdev_args *args)
 	return -EINVAL;
 }
 
+static int
+idxd_vdev_get_max_batches(struct idxd_vdev_args *args)
+{
+	char sysfs_path[PATH_MAX];
+	FILE *f;
+	int ret;
+
+	snprintf(sysfs_path, sizeof(sysfs_path),
+			"/sys/bus/dsa/devices/wq%u.%u/size",
+			args->device_id, args->wq_id);
+	f = fopen(sysfs_path, "r");
+	if (f == NULL)
+		return -1;
+
+	if (fscanf(f, "%d", &ret) != 1)
+		ret = -1;
+
+	fclose(f);
+	return ret;
+}
+
 static int
 idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 {
 	struct rte_kvargs *kvlist;
+	struct idxd_rawdev idxd = {{0}}; /* double {} to avoid error on BSD12 */
 	struct idxd_vdev_args vdev_args;
 	const char *name;
 	int ret = 0;
@@ -96,13 +154,32 @@ idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 		return -EINVAL;
 	}
 
+	idxd.qid = vdev_args.wq_id;
+	idxd.u.vdev.dsa_id = vdev_args.device_id;
+	idxd.max_batches = idxd_vdev_get_max_batches(&vdev_args);
+
+	idxd.public.portal = idxd_vdev_mmap_wq(&vdev_args);
+	if (idxd.public.portal == NULL) {
+		IOAT_PMD_ERR("WQ mmap failed");
+		return -ENOENT;
+	}
+
+	ret = idxd_rawdev_create(name, &vdev->device, &idxd, &idxd_vdev_ops);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to create rawdev %s", name);
+		return ret;
+	}
+
 	return 0;
 }
 
 static int
 idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 {
+	struct idxd_rawdev *idxd;
 	const char *name;
+	struct rte_rawdev *rdev;
+	int ret = 0;
 
 	name = rte_vdev_device_name(vdev);
 	if (name == NULL)
@@ -110,7 +187,34 @@ idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 
 	IOAT_PMD_INFO("Remove DSA vdev %p", name);
 
-	return 0;
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* free context and memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+
+		if (munmap(idxd->public.portal, 0x1000) < 0) {
+			IOAT_PMD_ERR("Error unmapping portal");
+			ret = -errno;
+		}
+
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+
+		rte_memzone_free(idxd->mz);
+	}
+
+	if (rte_rawdev_pmd_release(rdev))
+		IOAT_PMD_ERR("Device cleanup failed");
+
+	return ret;
 }
 
 struct rte_vdev_driver idxd_rawdev_drv_vdev = {
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 53f00a9f3..6f7bdb499 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -45,6 +45,10 @@ struct idxd_rawdev {
 	uint16_t max_batches;
 
 	union {
+		struct {
+			unsigned int dsa_id;
+		} vdev;
+
 		struct idxd_pci_common *pci;
 	} u;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 16/25] raw/ioat: add datapath data structures for idxd devices
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (14 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 17/25] raw/ioat: add configure function " Bruce Richardson
                     ` (8 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add the data structures needed for the DSA device data path. Also include a
device dump function to output the status of each device.
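
As a quick illustration, the dump callback added here is reached through the
generic rawdev API; a minimal sketch, assuming dev_id refers to one of these
idxd rawdev instances:

#include <stdio.h>
#include <stdint.h>
#include <rte_rawdev.h>

static void
show_idxd_state(uint16_t dev_id)
{
	/* invokes the .dump op (idxd_dev_dump) registered by this patch */
	if (rte_rawdev_dump(dev_id, stdout) != 0)
		printf("rawdev %u: dump failed\n", dev_id);
}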

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 34 +++++++++++
 drivers/raw/ioat/ioat_private.h        |  2 +
 drivers/raw/ioat/ioat_rawdev_test.c    |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 80 ++++++++++++++++++++++++++
 6 files changed, 120 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index c3fec56d5..9bee92766 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -54,6 +54,7 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index e61c26c1b..ba78eee90 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -33,6 +33,7 @@ struct idxd_vdev_args {
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index c3aa015ed..672241351 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -14,6 +14,36 @@ idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
 	return 0;
 }
 
+int
+idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	int i;
+
+	fprintf(f, "Raw Device #%d\n", dev->dev_id);
+	fprintf(f, "Driver: %s\n\n", dev->driver_name);
+
+	fprintf(f, "Portal: %p\n", rte_idxd->portal);
+	fprintf(f, "Batch Ring size: %u\n", rte_idxd->batch_ring_sz);
+	fprintf(f, "Comp Handle Ring size: %u\n\n", rte_idxd->hdl_ring_sz);
+
+	fprintf(f, "Next batch: %u\n", rte_idxd->next_batch);
+	fprintf(f, "Next batch to be completed: %u\n", rte_idxd->next_completed);
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		fprintf(f, "Batch %u @%p: submitted=%u, op_count=%u, hdl_end=%u\n",
+				i, b, b->submitted, b->op_count, b->hdl_end);
+	}
+
+	fprintf(f, "\n");
+	fprintf(f, "Next free hdl: %u\n", rte_idxd->next_free_hdl);
+	fprintf(f, "Last completed hdl: %u\n", rte_idxd->last_completed_hdl);
+	fprintf(f, "Next returned hdl: %u\n", rte_idxd->next_ret_hdl);
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
@@ -25,6 +55,10 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	int ret = 0;
 
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_hw_desc) != 64);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_idxd_hw_desc, size) != 32);
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_completion) != 32);
+
 	if (!name) {
 		IOAT_PMD_ERR("Invalid name of the device!");
 		ret = -EINVAL;
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 6f7bdb499..f521c85a1 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -61,4 +61,6 @@ extern int idxd_rawdev_close(struct rte_rawdev *dev);
 
 extern int idxd_rawdev_test(uint16_t dev_id);
 
+extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 87a65b7ae..ceeac92ef 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -270,7 +270,8 @@ ioat_rawdev_test(uint16_t dev_id)
 }
 
 int
-idxd_rawdev_test(uint16_t dev_id __rte_unused)
+idxd_rawdev_test(uint16_t dev_id)
 {
+	rte_rawdev_dump(dev_id, stdout);
 	return 0;
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index fa2eb5334..178c432dd 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -90,6 +90,86 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED			0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/*
+ * Defines used in the data path for interacting with hardware.
+ */
+#define IDXD_CMD_OP_SHIFT 24
+enum rte_idxd_ops {
+	idxd_op_nop = 0,
+	idxd_op_batch,
+	idxd_op_drain,
+	idxd_op_memmove,
+	idxd_op_fill
+};
+
+#define IDXD_FLAG_FENCE                 (1 << 0)
+#define IDXD_FLAG_COMPLETION_ADDR_VALID (1 << 2)
+#define IDXD_FLAG_REQUEST_COMPLETION    (1 << 3)
+#define IDXD_FLAG_CACHE_CONTROL         (1 << 8)
+
+/**
+ * Hardware descriptor used by DSA hardware, for both bursts and
+ * for individual operations.
+ */
+struct rte_idxd_hw_desc {
+	uint32_t pasid;
+	uint32_t op_flags;
+	rte_iova_t completion;
+
+	RTE_STD_C11
+	union {
+		rte_iova_t src;      /* source address for copy ops etc. */
+		rte_iova_t desc_addr; /* descriptor pointer for batch */
+	};
+	rte_iova_t dst;
+
+	uint32_t size;    /* length of data for op, or batch size */
+
+	/* 28 bytes of padding here */
+} __rte_aligned(64);
+
+/**
+ * Completion record structure written back by DSA
+ */
+struct rte_idxd_completion {
+	uint8_t status;
+	uint8_t result;
+	/* 16-bits pad here */
+	uint32_t completed_size; /* data length, or descriptors for batch */
+
+	rte_iova_t fault_address;
+	uint32_t invalid_flags;
+} __rte_aligned(32);
+
+#define BATCH_SIZE 64
+
+/**
+ * Structure used inside the driver for building up and submitting
+ * a batch of operations to the DSA hardware.
+ */
+struct rte_idxd_desc_batch {
+	struct rte_idxd_completion comp; /* the completion record for batch */
+
+	uint16_t submitted;
+	uint16_t op_count;
+	uint16_t hdl_end;
+
+	struct rte_idxd_hw_desc batch_desc;
+
+	/* batches must always have 2 descriptors, so put a null at the start */
+	struct rte_idxd_hw_desc null_desc;
+	struct rte_idxd_hw_desc ops[BATCH_SIZE];
+};
+
+/**
+ * structure used to save the "handles" provided by the user to be
+ * returned to the user on job completion.
+ */
+struct rte_idxd_user_hdl {
+	uint64_t src;
+	uint64_t dst;
+};
+
 /**
  * @internal
  * Structure representing an IDXD device instance
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 17/25] raw/ioat: add configure function for idxd devices
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (15 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 18/25] raw/ioat: add start and stop functions " Bruce Richardson
                     ` (7 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add a configure function for idxd devices, taking the same parameters as the
existing configure function for ioat. The ring_size parameter is used to
compute the maximum number of batches to be supported by the driver, given
that the hardware works on individual batches of descriptors at a time.
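
To make the ring sizing concrete, here is a small standalone sketch of the
arithmetic performed in the configure function below (the values used are
purely illustrative):

#include <stdio.h>
#include <stdint.h>

#define BATCH_SIZE 64	/* descriptors per hardware batch, as in the driver */

int main(void)
{
	uint16_t ring_size = 4096;	/* requested via rte_ioat_rawdev_config */
	uint16_t hw_max_batches = 32;	/* assumed limit read from the device */
	uint16_t max_batches = ring_size / BATCH_SIZE;

	/* clamp to what the hardware can track, then recompute descriptors */
	if (max_batches > hw_max_batches) {
		max_batches = hw_max_batches;
		ring_size = max_batches * BATCH_SIZE;
	}
	printf("%u descriptors in %u batches\n", ring_size, max_batches);
	return 0;
}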

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 64 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h        |  3 ++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  1 +
 5 files changed, 70 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 9bee92766..b173c5ae3 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -55,6 +55,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index ba78eee90..3dad1473b 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -34,6 +34,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 672241351..5173c331c 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -44,6 +44,70 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	struct rte_ioat_rawdev_config *cfg = config;
+	uint16_t max_desc = cfg->ring_size;
+	uint16_t max_batches = max_desc / BATCH_SIZE;
+	uint16_t i;
+
+	if (config_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (dev->started) {
+		IOAT_PMD_ERR("%s: Error, device is started.", __func__);
+		return -EAGAIN;
+	}
+
+	rte_idxd->hdls_disable = cfg->hdls_disable;
+
+	/* limit the batches to what can be stored in hardware */
+	if (max_batches > idxd->max_batches) {
+		IOAT_PMD_DEBUG("Ring size of %u is too large for this device, need to limit to %u batches of %u",
+				max_desc, idxd->max_batches, BATCH_SIZE);
+		max_batches = idxd->max_batches;
+		max_desc = max_batches * BATCH_SIZE;
+	}
+	if (!rte_is_power_of_2(max_desc))
+		max_desc = rte_align32pow2(max_desc);
+	IOAT_PMD_DEBUG("Rawdev %u using %u descriptors in %u batches",
+			dev->dev_id, max_desc, max_batches);
+
+	/* in case we are reconfiguring a device, free any existing memory */
+	rte_free(rte_idxd->batch_ring);
+	rte_free(rte_idxd->hdl_ring);
+
+	rte_idxd->batch_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->batch_ring) * max_batches, 0);
+	if (rte_idxd->batch_ring == NULL)
+		return -ENOMEM;
+
+	rte_idxd->hdl_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->hdl_ring) * max_desc, 0);
+	if (rte_idxd->hdl_ring == NULL) {
+		rte_free(rte_idxd->batch_ring);
+		rte_idxd->batch_ring = NULL;
+		return -ENOMEM;
+	}
+	rte_idxd->batch_ring_sz = max_batches;
+	rte_idxd->hdl_ring_sz = max_desc;
+
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		b->batch_desc.completion = rte_mem_virt2iova(&b->comp);
+		b->batch_desc.desc_addr = rte_mem_virt2iova(&b->null_desc);
+		b->batch_desc.op_flags = (idxd_op_batch << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_COMPLETION_ADDR_VALID |
+				IDXD_FLAG_REQUEST_COMPLETION;
+	}
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index f521c85a1..aba70d8d7 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -59,6 +59,9 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 extern int idxd_rawdev_close(struct rte_rawdev *dev);
 
+extern int idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 178c432dd..e9cdce016 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -187,6 +187,7 @@ struct rte_idxd_rawdev {
 	uint16_t next_ret_hdl;   /* the next user hdl to return */
 	uint16_t last_completed_hdl; /* the last user hdl that has completed */
 	uint16_t next_free_hdl;  /* where the handle for next op will go */
+	uint16_t hdls_disable;   /* disable tracking completion handles */
 
 	struct rte_idxd_user_hdl *hdl_ring;
 	struct rte_idxd_desc_batch *batch_ring;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 18/25] raw/ioat: add start and stop functions for idxd devices
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (16 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 17/25] raw/ioat: add configure function " Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 19/25] raw/ioat: add data path " Bruce Richardson
                     ` (6 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add start and stop functions for DSA hardware devices that are bound to the
vfio/uio kernel drivers. For vdevs using the idxd kernel driver, the device
must be enabled via sysfs before its device node appears for vdev use, so
start/stop functions in the driver are unnecessary for those devices.
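
From the application side these new ops are reached through the generic
rawdev start/stop calls; a minimal sketch (the device is assumed to have
been configured already, and error handling is abbreviated):

#include <stdint.h>
#include <rte_rawdev.h>

static int
start_dsa_queue(uint16_t dev_id)
{
	/* ends up in idxd_pci_dev_start(), which issues idxd_enable_wq */
	return rte_rawdev_start(dev_id);
}

static void
stop_dsa_queue(uint16_t dev_id)
{
	/* ends up in idxd_pci_dev_stop(), which issues idxd_disable_wq */
	rte_rawdev_stop(dev_id);
}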

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c | 50 +++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index b173c5ae3..6b5c47392 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -51,11 +51,61 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 	return ((state >> WQ_STATE_SHIFT) & WQ_STATE_MASK) == 0x1;
 }
 
+static void
+idxd_pci_dev_stop(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (!idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Work queue %d already disabled", idxd->qid);
+		return;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_wq);
+	if (err_code || idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed disabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return;
+	}
+	IOAT_PMD_DEBUG("Work queue %d disabled OK", idxd->qid);
+}
+
+static int
+idxd_pci_dev_start(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_WARN("WQ %d already enabled", idxd->qid);
+		return 0;
+	}
+
+	if (idxd->public.batch_ring == NULL) {
+		IOAT_PMD_ERR("WQ %d has not been fully configured", idxd->qid);
+		return -EINVAL;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_wq);
+	if (err_code || !idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed enabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return err_code == 0 ? -1 : err_code;
+	}
+
+	IOAT_PMD_DEBUG("Work queue %d enabled OK", idxd->qid);
+
+	return 0;
+}
+
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_start = idxd_pci_dev_start,
+		.dev_stop = idxd_pci_dev_stop,
 };
 
 /* each portal uses 4 x 4k pages */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 19/25] raw/ioat: add data path for idxd devices
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (17 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 18/25] raw/ioat: add start and stop functions " Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 20/25] raw/ioat: add info function " Bruce Richardson
                     ` (5 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add support for doing copies using DSA hardware. This is implemented by
switching on the device type field at the start of the inline functions.
Since no hardware will have both device types present, this branch is always
predictable after the first call, so it carries little to no performance
penalty.
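
To show the dispatch scheme in isolation, here is a self-contained sketch of
the same pattern, a common leading enum field selecting the implementation
at run time (the names are illustrative, not the driver's):

#include <stdio.h>

enum dev_type { DEV_IOAT_LIKE, DEV_IDXD_LIKE };

struct dev_a { enum dev_type type; int a_only; };
struct dev_b { enum dev_type type; int b_only; };

/* works on either struct because 'type' is the first member of both */
static const char *
copy_path(const void *dev_private)
{
	const enum dev_type *type = dev_private;

	return (*type == DEV_IDXD_LIKE) ? "idxd path" : "ioat path";
}

int main(void)
{
	struct dev_a a = { .type = DEV_IOAT_LIKE, .a_only = 0 };
	struct dev_b b = { .type = DEV_IDXD_LIKE, .b_only = 0 };

	printf("%s / %s\n", copy_path(&a), copy_path(&b));
	return 0;
}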

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_common.c         |   1 +
 drivers/raw/ioat/ioat_rawdev.c         |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 201 +++++++++++++++++++++++--
 3 files changed, 192 insertions(+), 11 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 5173c331c..6a4e2979f 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -153,6 +153,7 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 	idxd = rawdev->dev_private;
 	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->public.type = RTE_IDXD_DEV;
 	idxd->rawdev = rawdev;
 	idxd->mz = mz;
 
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 1fe32278d..0097be87e 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -260,6 +260,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	rawdev->driver_name = dev->device.driver->name;
 
 	ioat = rawdev->dev_private;
+	ioat->type = RTE_IOAT_DEV;
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index e9cdce016..36ba876ea 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -196,8 +196,8 @@ struct rte_idxd_rawdev {
 /*
  * Enqueue a copy operation onto the ioat device
  */
-static inline int
-rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+static __rte_always_inline int
+__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -233,8 +233,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 }
 
 /* add fence to last written descriptor */
-static inline int
-rte_ioat_fence(int dev_id)
+static __rte_always_inline int
+__ioat_fence(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -252,8 +252,8 @@ rte_ioat_fence(int dev_id)
 /*
  * Trigger hardware to begin performing enqueued operations
  */
-static inline void
-rte_ioat_perform_ops(int dev_id)
+static __rte_always_inline void
+__ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -268,8 +268,8 @@ rte_ioat_perform_ops(int dev_id)
  * @internal
  * Returns the index of the last completed operation.
  */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+static __rte_always_inline int
+__ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 {
 	uint64_t status = ioat->status;
 
@@ -283,8 +283,8 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /*
  * Returns details of operations that have been completed
  */
-static inline int
-rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
+static __rte_always_inline int
+__ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -295,7 +295,7 @@ rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 	int error;
 	int i = 0;
 
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	end_read = (__ioat_get_last_completed(ioat, &error) + 1) & mask;
 	count = (end_read - (read & mask)) & mask;
 
 	if (error) {
@@ -332,6 +332,185 @@ rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static __rte_always_inline int
+__idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
+		const struct rte_idxd_user_hdl *hdl)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+
+	/* check for room in the handle ring */
+	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl)
+		goto failed;
+
+	/* check for space in current batch */
+	if (b->op_count >= BATCH_SIZE)
+		goto failed;
+
+	/* check that we can actually use the current batch */
+	if (b->submitted)
+		goto failed;
+
+	/* write the descriptor */
+	b->ops[b->op_count++] = *desc;
+
+	/* store the completion details */
+	if (!idxd->hdls_disable)
+		idxd->hdl_ring[idxd->next_free_hdl] = *hdl;
+	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
+		idxd->next_free_hdl = 0;
+
+	return 1;
+
+failed:
+	rte_errno = ENOSPC;
+	return 0;
+}
+
+static __rte_always_inline int
+__idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	const struct rte_idxd_hw_desc desc = {
+			.op_flags =  (idxd_op_memmove << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_CACHE_CONTROL,
+			.src = src,
+			.dst = dst,
+			.size = length
+	};
+	const struct rte_idxd_user_hdl hdl = {
+			.src = src_hdl,
+			.dst = dst_hdl
+	};
+	return __idxd_write_desc(dev_id, &desc, &hdl);
+}
+
+static __rte_always_inline int
+__idxd_fence(int dev_id)
+{
+	static const struct rte_idxd_hw_desc fence = {
+			.op_flags = IDXD_FLAG_FENCE
+	};
+	static const struct rte_idxd_user_hdl null_hdl;
+	return __idxd_write_desc(dev_id, &fence, &null_hdl);
+}
+
+static __rte_always_inline void
+__idxd_movdir64b(volatile void *dst, const void *src)
+{
+	asm volatile (".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
+			:
+			: "a" (dst), "d" (src));
+}
+
+static __rte_always_inline void
+__idxd_perform_ops(int dev_id)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+
+	if (b->submitted || b->op_count == 0)
+		return;
+	b->hdl_end = idxd->next_free_hdl;
+	b->comp.status = 0;
+	b->submitted = 1;
+	b->batch_desc.size = b->op_count + 1;
+	__idxd_movdir64b(idxd->portal, &b->batch_desc);
+
+	if (++idxd->next_batch == idxd->batch_ring_sz)
+		idxd->next_batch = 0;
+}
+
+static __rte_always_inline int
+__idxd_completed_ops(int dev_id, uint8_t max_ops,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_completed];
+	uint16_t h_idx = idxd->next_ret_hdl;
+	int n = 0;
+
+	while (b->submitted && b->comp.status != 0) {
+		idxd->last_completed_hdl = b->hdl_end;
+		b->submitted = 0;
+		b->op_count = 0;
+		if (++idxd->next_completed == idxd->batch_ring_sz)
+			idxd->next_completed = 0;
+		b = &idxd->batch_ring[idxd->next_completed];
+	}
+
+	if (!idxd->hdls_disable)
+		for (n = 0; n < max_ops && h_idx != idxd->last_completed_hdl; n++) {
+			src_hdls[n] = idxd->hdl_ring[h_idx].src;
+			dst_hdls[n] = idxd->hdl_ring[h_idx].dst;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+	else
+		while (h_idx != idxd->last_completed_hdl) {
+			n++;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+
+	idxd->next_ret_hdl = h_idx;
+
+	return n;
+}
+
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl);
+	else
+		return __ioat_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl);
+}
+
+static inline int
+rte_ioat_fence(int dev_id)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_fence(dev_id);
+	else
+		return __ioat_fence(dev_id);
+}
+
+static inline void
+rte_ioat_perform_ops(int dev_id)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_perform_ops(dev_id);
+	else
+		return __ioat_perform_ops(dev_id);
+}
+
+static inline int
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_completed_ops(dev_id, max_copies,
+				src_hdls, dst_hdls);
+	else
+		return __ioat_completed_ops(dev_id,  max_copies,
+				src_hdls, dst_hdls);
+}
+
 static inline void
 __rte_deprecated_msg("use rte_ioat_perform_ops() instead")
 rte_ioat_do_copies(int dev_id) { rte_ioat_perform_ops(dev_id); }
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 20/25] raw/ioat: add info function for idxd devices
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (18 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 19/25] raw/ioat: add data path " Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 21/25] raw/ioat: create separate statistics structure Bruce Richardson
                     ` (4 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add the info get function for DSA devices, returning just the ring size
info about the device, the same as is returned for existing IOAT/CBDMA
devices.
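
As a usage note, the ring size reported here can be read back through the
generic info call; a hedged sketch, assuming the 20.11 rte_rawdev_info_get()
prototype that takes the size of the driver-private config area:

#include <stdint.h>
#include <rte_rawdev.h>
#include <rte_ioat_rawdev.h>

static int
get_ring_size(uint16_t dev_id)
{
	struct rte_ioat_rawdev_config cfg;
	struct rte_rawdev_info info = { .dev_private = &cfg };

	/* fills cfg via the idxd_dev_info_get() op added in this patch */
	if (rte_rawdev_info_get(dev_id, &info, sizeof(cfg)) != 0)
		return -1;
	return cfg.ring_size;
}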

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c     |  1 +
 drivers/raw/ioat/idxd_vdev.c    |  1 +
 drivers/raw/ioat/ioat_common.c  | 18 ++++++++++++++++++
 drivers/raw/ioat/ioat_private.h |  3 +++
 4 files changed, 23 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 6b5c47392..bf5edcfdd 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -106,6 +106,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 3dad1473b..c75ac4317 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -35,6 +35,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 6a4e2979f..b5cea2fda 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -44,6 +44,24 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size)
+{
+	struct rte_ioat_rawdev_config *cfg = dev_info;
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+
+	if (info_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (cfg != NULL) {
+		cfg->ring_size = rte_idxd->hdl_ring_sz;
+		cfg->hdls_disable = rte_idxd->hdls_disable;
+	}
+	return 0;
+}
+
 int
 idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size)
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index aba70d8d7..0f80d60bf 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -62,6 +62,9 @@ extern int idxd_rawdev_close(struct rte_rawdev *dev);
 extern int idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size);
 
+extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 21/25] raw/ioat: create separate statistics structure
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (19 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 20/25] raw/ioat: add info function " Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
                     ` (3 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Rather than having the xstats as fields inside the main driver structure,
create a separate structure type for them.

As part of the change, when updating the stats functions that previously
referred to the individual fields, we can simplify them to use the stat id
to index directly into the stats structure, making the code shorter and
simpler.
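
The simplification relies on the new structure being laid out as consecutive
uint64_t counters, so a stat id can be used directly as an array index; a
standalone sketch of the idea (struct and values are illustrative):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

struct xstats {
	uint64_t enqueue_failed;
	uint64_t enqueued;
	uint64_t started;
	uint64_t completed;
};

int main(void)
{
	struct xstats s = { .enqueued = 42 };
	const uint64_t *vals = (const void *)&s;
	unsigned int nb_stats = sizeof(s) / sizeof(*vals);
	unsigned int id = 1;	/* id 1 maps to the second field, 'enqueued' */

	if (id < nb_stats)
		printf("stat[%u] = %" PRIu64 "\n", id, vals[id]);
	return 0;
}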

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c         | 40 +++++++-------------------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 30 ++++++++++++-------
 2 files changed, 29 insertions(+), 41 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 0097be87e..4ea913fff 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -132,16 +132,14 @@ ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
 		uint64_t values[], unsigned int n)
 {
 	const struct rte_ioat_rawdev *ioat = dev->dev_private;
+	const uint64_t *stats = (const void *)&ioat->xstats;
 	unsigned int i;
 
 	for (i = 0; i < n; i++) {
-		switch (ids[i]) {
-		case 0: values[i] = ioat->enqueue_failed; break;
-		case 1: values[i] = ioat->enqueued; break;
-		case 2: values[i] = ioat->started; break;
-		case 3: values[i] = ioat->completed; break;
-		default: values[i] = 0; break;
-		}
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			values[i] = stats[ids[i]];
+		else
+			values[i] = 0;
 	}
 	return n;
 }
@@ -167,35 +165,17 @@ static int
 ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
 {
 	struct rte_ioat_rawdev *ioat = dev->dev_private;
+	uint64_t *stats = (void *)&ioat->xstats;
 	unsigned int i;
 
 	if (!ids) {
-		ioat->enqueue_failed = 0;
-		ioat->enqueued = 0;
-		ioat->started = 0;
-		ioat->completed = 0;
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
 		return 0;
 	}
 
-	for (i = 0; i < nb_ids; i++) {
-		switch (ids[i]) {
-		case 0:
-			ioat->enqueue_failed = 0;
-			break;
-		case 1:
-			ioat->enqueued = 0;
-			break;
-		case 2:
-			ioat->started = 0;
-			break;
-		case 3:
-			ioat->completed = 0;
-			break;
-		default:
-			IOAT_PMD_WARN("Invalid xstat id - cannot reset value");
-			break;
-		}
-	}
+	for (i = 0; i < nb_ids; i++)
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			stats[ids[i]] = 0;
 
 	return 0;
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 36ba876ea..89bfc8d21 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -49,17 +49,31 @@ enum rte_ioat_dev_type {
 	RTE_IDXD_DEV,
 };
 
+/**
+ * @internal
+ * some statistics for tracking, if added/changed update xstats fns
+ */
+struct rte_ioat_xstats {
+	uint64_t enqueue_failed;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+};
+
 /**
  * @internal
  * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	/* common fields at the top - match those in rte_idxd_rawdev */
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile uint16_t *doorbell;
+	volatile uint16_t *doorbell __rte_cache_aligned;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -72,12 +86,6 @@ struct rte_ioat_rawdev {
 	unsigned short next_read;
 	unsigned short next_write;
 
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
 
@@ -209,7 +217,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	struct rte_ioat_generic_hw_desc *desc;
 
 	if (space == 0) {
-		ioat->enqueue_failed++;
+		ioat->xstats.enqueue_failed++;
 		return 0;
 	}
 
@@ -228,7 +236,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 					(int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
-	ioat->enqueued++;
+	ioat->xstats.enqueued++;
 	return 1;
 }
 
@@ -261,7 +269,7 @@ __ioat_perform_ops(int dev_id)
 			.control.completion_update = 1;
 	rte_compiler_barrier();
 	*ioat->doorbell = ioat->next_write;
-	ioat->started = ioat->enqueued;
+	ioat->xstats.started = ioat->xstats.enqueued;
 }
 
 /**
@@ -328,7 +336,7 @@ __ioat_completed_ops(int dev_id, uint8_t max_copies,
 
 end:
 	ioat->next_read = read;
-	ioat->completed += count;
+	ioat->xstats.completed += count;
 	return count;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 22/25] raw/ioat: move xstats functions to common file
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (20 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 21/25] raw/ioat: create separate statistics structure Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
                     ` (2 subsequent siblings)
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

The xstats functions can be used by all ioat devices, so move them from
ioat_rawdev.c to ioat_common.c and add their prototypes to the internal
header file.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_common.c  | 59 +++++++++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 10 ++++++
 drivers/raw/ioat/ioat_rawdev.c  | 58 --------------------------------
 3 files changed, 69 insertions(+), 58 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index b5cea2fda..142e171bc 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -5,9 +5,68 @@
 #include <rte_rawdev_pmd.h>
 #include <rte_memzone.h>
 #include <rte_common.h>
+#include <rte_string_fns.h>
 
 #include "ioat_private.h"
 
+static const char * const xstat_names[] = {
+		"failed_enqueues", "successful_enqueues",
+		"copies_started", "copies_completed"
+};
+
+int
+ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n)
+{
+	const struct rte_ioat_rawdev *ioat = dev->dev_private;
+	const uint64_t *stats = (const void *)&ioat->xstats;
+	unsigned int i;
+
+	for (i = 0; i < n; i++) {
+		if (ids[i] >= sizeof(ioat->xstats)/sizeof(*stats))
+			values[i] = 0;
+		else
+			values[i] = stats[ids[i]];
+	}
+	return n;
+}
+
+int
+ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size)
+{
+	unsigned int i;
+
+	RTE_SET_USED(dev);
+	if (size < RTE_DIM(xstat_names))
+		return RTE_DIM(xstat_names);
+
+	for (i = 0; i < RTE_DIM(xstat_names); i++)
+		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
+
+	return RTE_DIM(xstat_names);
+}
+
+int
+ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
+{
+	struct rte_ioat_rawdev *ioat = dev->dev_private;
+	uint64_t *stats = (void *)&ioat->xstats;
+	unsigned int i;
+
+	if (!ids) {
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
+		return 0;
+	}
+
+	for (i = 0; i < nb_ids; i++)
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			stats[ids[i]] = 0;
+
+	return 0;
+}
+
 int
 idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
 {
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 0f80d60bf..ab9a3e6cc 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -53,6 +53,16 @@ struct idxd_rawdev {
 	} u;
 };
 
+int ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n);
+
+int ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size);
+
+int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
+		uint32_t nb_ids);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 4ea913fff..dd2543c80 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -122,64 +122,6 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 	return 0;
 }
 
-static const char * const xstat_names[] = {
-		"failed_enqueues", "successful_enqueues",
-		"copies_started", "copies_completed"
-};
-
-static int
-ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
-		uint64_t values[], unsigned int n)
-{
-	const struct rte_ioat_rawdev *ioat = dev->dev_private;
-	const uint64_t *stats = (const void *)&ioat->xstats;
-	unsigned int i;
-
-	for (i = 0; i < n; i++) {
-		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
-			values[i] = stats[ids[i]];
-		else
-			values[i] = 0;
-	}
-	return n;
-}
-
-static int
-ioat_xstats_get_names(const struct rte_rawdev *dev,
-		struct rte_rawdev_xstats_name *names,
-		unsigned int size)
-{
-	unsigned int i;
-
-	RTE_SET_USED(dev);
-	if (size < RTE_DIM(xstat_names))
-		return RTE_DIM(xstat_names);
-
-	for (i = 0; i < RTE_DIM(xstat_names); i++)
-		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
-
-	return RTE_DIM(xstat_names);
-}
-
-static int
-ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
-{
-	struct rte_ioat_rawdev *ioat = dev->dev_private;
-	uint64_t *stats = (void *)&ioat->xstats;
-	unsigned int i;
-
-	if (!ids) {
-		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
-		return 0;
-	}
-
-	for (i = 0; i < nb_ids; i++)
-		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
-			stats[ids[i]] = 0;
-
-	return 0;
-}
-
 static int
 ioat_dev_close(struct rte_rawdev *dev __rte_unused)
 {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 23/25] raw/ioat: add xstats tracking for idxd devices
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (21 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 24/25] raw/ioat: clean up use of common test function Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 25/25] raw/ioat: add fill operation Bruce Richardson
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Update the relevant stats in the data path functions and point the xstats
function pointers in the device ops structs to the existing ioat
functions.

At this point, all necessary hooks for supporting the existing unit tests
are in place so call them for each device.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            | 3 +++
 drivers/raw/ioat/idxd_vdev.c           | 3 +++
 drivers/raw/ioat/ioat_rawdev_test.c    | 2 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 6 ++++++
 4 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index bf5edcfdd..9113f8c8e 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -107,6 +107,9 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index c75ac4317..38218cc1e 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -36,6 +36,9 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index ceeac92ef..a84be56c4 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -273,5 +273,5 @@ int
 idxd_rawdev_test(uint16_t dev_id)
 {
 	rte_rawdev_dump(dev_id, stdout);
-	return 0;
+	return ioat_rawdev_test(dev_id);
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 89bfc8d21..d0045d8a4 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -184,6 +184,8 @@ struct rte_idxd_user_hdl {
  */
 struct rte_idxd_rawdev {
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	void *portal; /* address to write the batch descriptor */
 
 	/* counters to track the batches and the individual op handles */
@@ -369,9 +371,11 @@ __idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
 	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
 		idxd->next_free_hdl = 0;
 
+	idxd->xstats.enqueued++;
 	return 1;
 
 failed:
+	idxd->xstats.enqueue_failed++;
 	rte_errno = ENOSPC;
 	return 0;
 }
@@ -429,6 +433,7 @@ __idxd_perform_ops(int dev_id)
 
 	if (++idxd->next_batch == idxd->batch_ring_sz)
 		idxd->next_batch = 0;
+	idxd->xstats.started = idxd->xstats.enqueued;
 }
 
 static __rte_always_inline int
@@ -466,6 +471,7 @@ __idxd_completed_ops(int dev_id, uint8_t max_ops,
 
 	idxd->next_ret_hdl = h_idx;
 
+	idxd->xstats.completed += n;
 	return n;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 24/25] raw/ioat: clean up use of common test function
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (22 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 25/25] raw/ioat: add fill operation Bruce Richardson
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Now that all devices can pass the same set of unit tests, eliminate the
temporary idxd_rawdev_test function and move the prototype for
ioat_rawdev_test to the proper internal header file, to be used by all
device instances.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c         | 2 +-
 drivers/raw/ioat/idxd_vdev.c        | 2 +-
 drivers/raw/ioat/ioat_private.h     | 4 ++--
 drivers/raw/ioat/ioat_rawdev.c      | 2 --
 drivers/raw/ioat/ioat_rawdev_test.c | 7 -------
 5 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 9113f8c8e..165a9ea7f 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -101,7 +101,7 @@ idxd_pci_dev_start(struct rte_rawdev *dev)
 
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 38218cc1e..50d47d05c 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -32,7 +32,7 @@ struct idxd_vdev_args {
 
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_info_get = idxd_dev_info_get,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index ab9a3e6cc..a74bc0422 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -63,6 +63,8 @@ int ioat_xstats_get_names(const struct rte_rawdev *dev,
 int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
 		uint32_t nb_ids);
 
+extern int ioat_rawdev_test(uint16_t dev_id);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
@@ -75,8 +77,6 @@ extern int idxd_dev_configure(const struct rte_rawdev *dev,
 extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 		size_t info_size);
 
-extern int idxd_rawdev_test(uint16_t dev_id);
-
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
 
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index dd2543c80..2c88b4369 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -128,8 +128,6 @@ ioat_dev_close(struct rte_rawdev *dev __rte_unused)
 	return 0;
 }
 
-extern int ioat_rawdev_test(uint16_t dev_id);
-
 static int
 ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index a84be56c4..60d189b62 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -268,10 +268,3 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
-
-int
-idxd_rawdev_test(uint16_t dev_id)
-{
-	rte_rawdev_dump(dev_id, stdout);
-	return ioat_rawdev_test(dev_id);
-}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v5 25/25] raw/ioat: add fill operation
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
                     ` (23 preceding siblings ...)
  2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 24/25] raw/ioat: clean up use of common test function Bruce Richardson
@ 2020-10-07 16:30   ` Bruce Richardson
  24 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Kevin Laatz, Bruce Richardson, Radu Nicolau

From: Kevin Laatz <kevin.laatz@intel.com>

Add fill operation enqueue support for IOAT and IDXD. The fill enqueue is
similar to the copy enqueue, but takes a 'pattern' to be written to the
destination address rather than a source address to copy from. This patch
also includes an additional test case for the new operation type.
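
For reference, a minimal usage sketch of the new operation, closely
modelled on the unit test below. The helper name fill_one_mbuf is
hypothetical, dev_id and pool are assumed to refer to an already
configured and started rawdev and an mbuf pool, and the fixed 64-byte
length and the usleep() wait are only for illustration:

	#include <unistd.h>
	#include <rte_mbuf.h>
	#include <rte_ioat_rawdev.h>

	/* sketch: fill the start of one mbuf with a repeating 8-byte pattern */
	static int
	fill_one_mbuf(int dev_id, struct rte_mempool *pool)
	{
		struct rte_mbuf *dst = rte_pktmbuf_alloc(pool);
		uint64_t pattern = 0xfedcba9876543210;
		uintptr_t completed[2]; /* src handle (unused for fill), dst handle */

		if (dst == NULL)
			return -1;
		if (rte_ioat_enqueue_fill(dev_id, pattern,
				dst->buf_iova + dst->data_off, 64,
				(uintptr_t)dst) != 1) {
			rte_pktmbuf_free(dst);
			return -1;
		}
		rte_ioat_perform_ops(dev_id);	/* start the enqueued operation */
		usleep(100);			/* crude wait for completion */
		if (rte_ioat_completed_ops(dev_id, 1, &completed[0],
				&completed[1]) != 1) {
			rte_pktmbuf_free(dst);
			return -1;
		}
		rte_pktmbuf_free(dst);
		return 0;
	}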

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst            | 10 ++++
 doc/guides/rel_notes/release_20_11.rst |  2 +
 drivers/raw/ioat/ioat_rawdev_test.c    | 62 ++++++++++++++++++++++++
 drivers/raw/ioat/rte_ioat_rawdev.h     | 26 +++++++++++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 65 ++++++++++++++++++++++++--
 5 files changed, 160 insertions(+), 5 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 7c2a2d457..250cfc48a 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -285,6 +285,16 @@ is correct before freeing the data buffers using the returned handles:
         }
 
 
+Filling an Area of Memory
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The IOAT driver also has support for the ``fill`` operation, where an area
+of memory is overwritten, or filled, with a short pattern of data.
+Fill operations can be performed in much the same way as copy operations
+described above, just using the ``rte_ioat_enqueue_fill()`` function rather
+than the ``rte_ioat_enqueue_copy()`` function.
+
+
 Querying Device Statistics
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index e48e6ea75..943ec83fd 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -122,6 +122,8 @@ New Features
 
   * Added support for Intel\ |reg| Data Streaming Accelerator hardware.
     For more information, see https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
+  * Added support for the fill operation via the API ``rte_ioat_enqueue_fill()``,
+    where the hardware fills an area of memory with a repeating pattern.
   * Added a per-device configuration flag to disable management of user-provided completion handles
   * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
     and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 60d189b62..101f24a67 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -155,6 +155,52 @@ test_enqueue_copies(int dev_id)
 	return 0;
 }
 
+static int
+test_enqueue_fill(int dev_id)
+{
+	const unsigned int length[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst = rte_pktmbuf_alloc(pool);
+	char *dst_data = rte_pktmbuf_mtod(dst, char *);
+	struct rte_mbuf *completed[2] = {0};
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	for (i = 0; i < RTE_DIM(length); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, length[i]);
+
+		/* perform the fill operation */
+		if (rte_ioat_enqueue_fill(dev_id, pattern,
+				dst->buf_iova + dst->data_off, length[i],
+				(uintptr_t)dst) != 1) {
+			PRINT_ERR("Error with rte_ioat_enqueue_fill\n");
+			return -1;
+		}
+
+		rte_ioat_perform_ops(dev_id);
+		usleep(100);
+
+		if (rte_ioat_completed_ops(dev_id, 1, (void *)&completed[0],
+			(void *)&completed[1]) != 1) {
+			PRINT_ERR("Error with completed ops\n");
+			return -1;
+		}
+		/* check the result */
+		for (j = 0; j < length[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte) {
+				PRINT_ERR("Error with fill operation (length = %u): got (%x), not (%x)\n",
+						length[i], dst_data[j],
+						pat_byte);
+				return -1;
+			}
+		}
+	}
+
+	rte_pktmbuf_free(dst);
+	return 0;
+}
+
 int
 ioat_rawdev_test(uint16_t dev_id)
 {
@@ -234,6 +280,7 @@ ioat_rawdev_test(uint16_t dev_id)
 	}
 
 	/* run the test cases */
+	printf("Running Copy Tests\n");
 	for (i = 0; i < 100; i++) {
 		unsigned int j;
 
@@ -247,6 +294,21 @@ ioat_rawdev_test(uint16_t dev_id)
 	}
 	printf("\n");
 
+	/* test enqueue fill operation */
+	printf("Running Fill Tests\n");
+	for (i = 0; i < 100; i++) {
+		unsigned int j;
+
+		if (test_enqueue_fill(dev_id) != 0)
+			goto err;
+
+		rte_rawdev_xstats_get(dev_id, ids, stats, nb_xstats);
+		for (j = 0; j < nb_xstats; j++)
+			printf("%s: %"PRIu64"   ", snames[j].name, stats[j]);
+		printf("\r");
+	}
+	printf("\n");
+
 	rte_rawdev_stop(dev_id);
 	if (rte_rawdev_xstats_reset(dev_id, NULL, 0) != 0) {
 		PRINT_ERR("Error resetting xstat values\n");
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 6b891cd44..b7632ebf3 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -37,6 +37,32 @@ struct rte_ioat_rawdev_config {
 	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
+/**
+ * Enqueue a fill operation onto the ioat device
+ *
+ * This queues up a fill operation to be performed by hardware, but does not
+ * trigger hardware to begin that operation.
+ *
+ * @param dev_id
+ *   The rawdev device id of the ioat instance
+ * @param pattern
+ *   The pattern to populate the destination buffer with
+ * @param dst
+ *   The physical address of the destination buffer
+ * @param length
+ *   The length of the destination buffer
+ * @param dst_hdl
+ *   An opaque handle for the destination data, to be returned when this
+ *   operation has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
+ * @return
+ *   Number of operations enqueued, either 0 or 1
+ */
+static inline int
+rte_ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int length, uintptr_t dst_hdl);
+
 /**
  * Enqueue a copy operation onto the ioat device
  *
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index d0045d8a4..c2c4601ca 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -115,6 +115,13 @@ enum rte_idxd_ops {
 #define IDXD_FLAG_REQUEST_COMPLETION    (1 << 3)
 #define IDXD_FLAG_CACHE_CONTROL         (1 << 8)
 
+#define IOAT_COMP_UPDATE_SHIFT	3
+#define IOAT_CMD_OP_SHIFT	24
+enum rte_ioat_ops {
+	ioat_op_copy = 0,	/* Standard DMA Operation */
+	ioat_op_fill		/* Block Fill */
+};
+
 /**
  * Hardware descriptor used by DSA hardware, for both bursts and
  * for individual operations.
@@ -203,11 +210,8 @@ struct rte_idxd_rawdev {
 	struct rte_idxd_desc_batch *batch_ring;
 };
 
-/*
- * Enqueue a copy operation onto the ioat device
- */
 static __rte_always_inline int
-__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+__ioat_write_desc(int dev_id, uint32_t op, uint64_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -229,7 +233,8 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc = &ioat->desc_ring[write];
 	desc->size = length;
 	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!(write & 0xF)) << 3);
+	desc->u.control_raw = (uint32_t)((op << IOAT_CMD_OP_SHIFT) |
+			(!(write & 0xF) << IOAT_COMP_UPDATE_SHIFT));
 	desc->src_addr = src;
 	desc->dest_addr = dst;
 
@@ -242,6 +247,27 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	return 1;
 }
 
+static __rte_always_inline int
+__ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int length, uintptr_t dst_hdl)
+{
+	static const uintptr_t null_hdl;
+
+	return __ioat_write_desc(dev_id, ioat_op_fill, pattern, dst, length,
+			null_hdl, dst_hdl);
+}
+
+/*
+ * Enqueue a copy operation onto the ioat device
+ */
+static __rte_always_inline int
+__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	return __ioat_write_desc(dev_id, ioat_op_copy, src, dst, length,
+			src_hdl, dst_hdl);
+}
+
 /* add fence to last written descriptor */
 static __rte_always_inline int
 __ioat_fence(int dev_id)
@@ -380,6 +406,23 @@ __idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
 	return 0;
 }
 
+static __rte_always_inline int
+__idxd_enqueue_fill(int dev_id, uint64_t pattern, rte_iova_t dst,
+		unsigned int length, uintptr_t dst_hdl)
+{
+	const struct rte_idxd_hw_desc desc = {
+			.op_flags =  (idxd_op_fill << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_CACHE_CONTROL,
+			.src = pattern,
+			.dst = dst,
+			.size = length
+	};
+	const struct rte_idxd_user_hdl hdl = {
+			.dst = dst_hdl
+	};
+	return __idxd_write_desc(dev_id, &desc, &hdl);
+}
+
 static __rte_always_inline int
 __idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
@@ -475,6 +518,18 @@ __idxd_completed_ops(int dev_id, uint8_t max_ops,
 	return n;
 }
 
+static inline int
+rte_ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int len, uintptr_t dst_hdl)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_enqueue_fill(dev_id, pattern, dst, len, dst_hdl);
+	else
+		return __ioat_enqueue_fill(dev_id, pattern, dst, len, dst_hdl);
+}
+
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support
  2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
                   ` (23 preceding siblings ...)
  2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
@ 2020-10-08  9:51 ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 01/25] doc/api: add ioat driver to index Bruce Richardson
                     ` (25 more replies)
  24 siblings, 26 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson

This patchset adds some small enhancements, some rework and also support
for new hardware to the ioat rawdev driver. Most rework and enhancements
are largely self-explanatory from the individual patches.

The new hardware support is for the Intel(R) DSA accelerator which will be
present in future Intel processors. A description of this new hardware is
covered in [1]. Functions specific to the new hardware use the "idxd"
prefix, for consistency with the kernel driver.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

---
V6:
 * Add explicit __rte_experimental tag on all functions.
   [Previously header just had note saying all contents are experimental]

V5:
 * Rebased to latest main branch.

V4:
 * Fixed compile with FreeBSD clang
 * Improved autotests for fill operation

V3:
 * More doc updates including release note updates throughout the set
 * Added in fill operation
 * Added in fix for missing close operation
 * Added in fix for doc building to ensure ioat is in the index

V2:
 * Included documentation additions in the set
 * Split off the rawdev unit test changes to a separate patchset for easier
   review
 * General code improvements and cleanups 


Bruce Richardson (19):
  doc/api: add ioat driver to index
  raw/ioat: enable use from C++ code
  raw/ioat: include extra info in error messages
  raw/ioat: split header for readability
  raw/ioat: rename functions to be operation-agnostic
  raw/ioat: add separate API for fence call
  raw/ioat: make the HW register spec private
  raw/ioat: add skeleton for VFIO/UIO based DSA device
  raw/ioat: include example configuration script
  raw/ioat: create rawdev instances on idxd PCI probe
  raw/ioat: add datapath data structures for idxd devices
  raw/ioat: add configure function for idxd devices
  raw/ioat: add start and stop functions for idxd devices
  raw/ioat: add data path for idxd devices
  raw/ioat: add info function for idxd devices
  raw/ioat: create separate statistics structure
  raw/ioat: move xstats functions to common file
  raw/ioat: add xstats tracking for idxd devices
  raw/ioat: clean up use of common test function

Cheng Jiang (1):
  raw/ioat: add a flag to control copying handle parameters

Kevin Laatz (5):
  raw/ioat: fix missing close function
  usertools/dpdk-devbind.py: add support for DSA HW
  raw/ioat: add vdev probe for DSA/idxd devices
  raw/ioat: create rawdev instances for idxd vdevs
  raw/ioat: add fill operation

 doc/api/doxy-api-index.md                     |   1 +
 doc/api/doxy-api.conf.in                      |   1 +
 doc/guides/rawdevs/ioat.rst                   | 163 +++--
 doc/guides/rel_notes/release_20_11.rst        |  23 +
 doc/guides/sample_app_ug/ioat.rst             |   8 +-
 drivers/raw/ioat/dpdk_idxd_cfg.py             |  79 +++
 drivers/raw/ioat/idxd_pci.c                   | 345 ++++++++++
 drivers/raw/ioat/idxd_vdev.c                  | 233 +++++++
 drivers/raw/ioat/ioat_common.c                | 244 +++++++
 drivers/raw/ioat/ioat_private.h               |  82 +++
 drivers/raw/ioat/ioat_rawdev.c                |  92 +--
 drivers/raw/ioat/ioat_rawdev_test.c           | 130 +++-
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} |  90 ++-
 drivers/raw/ioat/meson.build                  |  15 +-
 drivers/raw/ioat/rte_ioat_rawdev.h            | 226 +++----
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 595 ++++++++++++++++++
 examples/ioat/ioatfwd.c                       |  16 +-
 lib/librte_eal/include/rte_common.h           |   1 +
 usertools/dpdk-devbind.py                     |   6 +-
 19 files changed, 1996 insertions(+), 354 deletions(-)
 create mode 100755 drivers/raw/ioat/dpdk_idxd_cfg.py
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/idxd_vdev.c
 create mode 100644 drivers/raw/ioat/ioat_common.c
 create mode 100644 drivers/raw/ioat/ioat_private.h
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (74%)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 01/25] doc/api: add ioat driver to index
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 02/25] raw/ioat: fix missing close function Bruce Richardson
                     ` (24 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add the ioat driver to the doxygen documentation.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/api/doxy-api-index.md | 1 +
 doc/api/doxy-api.conf.in  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 037c77579..a9c12d1a2 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -44,6 +44,7 @@ The public API headers are grouped by topics:
   [ixgbe]              (@ref rte_pmd_ixgbe.h),
   [i40e]               (@ref rte_pmd_i40e.h),
   [ice]                (@ref rte_pmd_ice.h),
+  [ioat]               (@ref rte_ioat_rawdev.h),
   [bnxt]               (@ref rte_pmd_bnxt.h),
   [dpaa]               (@ref rte_pmd_dpaa.h),
   [dpaa2]              (@ref rte_pmd_dpaa2.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index ddef755c2..e37f8c2e8 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -18,6 +18,7 @@ INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/drivers/net/softnic \
                           @TOPDIR@/drivers/raw/dpaa2_cmdif \
                           @TOPDIR@/drivers/raw/dpaa2_qdma \
+                          @TOPDIR@/drivers/raw/ioat \
                           @TOPDIR@/lib/librte_eal/include \
                           @TOPDIR@/lib/librte_eal/include/generic \
                           @TOPDIR@/lib/librte_acl \
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 02/25] raw/ioat: fix missing close function
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 01/25] doc/api: add ioat driver to index Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 03/25] raw/ioat: enable use from C++ code Bruce Richardson
                     ` (23 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev
  Cc: patrick.fu, thomas, Kevin Laatz, stable, Sunil Pai G,
	Bruce Richardson, Radu Nicolau

From: Kevin Laatz <kevin.laatz@intel.com>

When rte_rawdev_pmd_release() is called, rte_rawdev_close() looks for a
dev_close function for the device, causing a segmentation fault when no
close() function is implemented for the driver.

This patch resolves the issue by adding a stub function ioat_dev_close().

Fixes: f687e842e328 ("raw/ioat: introduce IOAT driver")
Cc: stable@dpdk.org

Reported-by: Sunil Pai G <sunil.pai.g@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Sunil Pai G <sunil.pai.g@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 7f1a15436..0732b059f 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -203,6 +203,12 @@ ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
 	return 0;
 }
 
+static int
+ioat_dev_close(struct rte_rawdev *dev __rte_unused)
+{
+	return 0;
+}
+
 extern int ioat_rawdev_test(uint16_t dev_id);
 
 static int
@@ -212,6 +218,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 			.dev_configure = ioat_dev_configure,
 			.dev_start = ioat_dev_start,
 			.dev_stop = ioat_dev_stop,
+			.dev_close = ioat_dev_close,
 			.dev_info_get = ioat_dev_info_get,
 			.xstats_get = ioat_xstats_get,
 			.xstats_get_names = ioat_xstats_get_names,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 03/25] raw/ioat: enable use from C++ code
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 01/25] doc/api: add ioat driver to index Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 02/25] raw/ioat: fix missing close function Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 04/25] raw/ioat: include extra info in error messages Bruce Richardson
                     ` (22 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

To allow the header file to be used from C++ code, we need to ensure all
typecasts are explicit and include an 'extern "C"' guard.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/rte_ioat_rawdev.h | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index f765a6557..3d8419271 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -5,6 +5,10 @@
 #ifndef _RTE_IOAT_RAWDEV_H_
 #define _RTE_IOAT_RAWDEV_H_
 
+#ifdef __cplusplus
+extern "C" {
+#endif
+
 /**
  * @file rte_ioat_rawdev.h
  *
@@ -100,7 +104,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
 		int fence)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	unsigned short read = ioat->next_read;
 	unsigned short write = ioat->next_write;
 	unsigned short mask = ioat->ring_size - 1;
@@ -141,7 +146,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 static inline void
 rte_ioat_do_copies(int dev_id)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
 			.control.completion_update = 1;
 	rte_compiler_barrier();
@@ -190,7 +196,8 @@ static inline int
 rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
-	struct rte_ioat_rawdev *ioat = rte_rawdevs[dev_id].dev_private;
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
 	unsigned short mask = (ioat->ring_size - 1);
 	unsigned short read = ioat->next_read;
 	unsigned short end_read, count;
@@ -212,13 +219,13 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
 		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
 
-		_mm_storeu_si128((void *)&src_hdls[i],
+		_mm_storeu_si128((__m128i *)&src_hdls[i],
 				_mm_unpacklo_epi64(hdls0, hdls1));
-		_mm_storeu_si128((void *)&dst_hdls[i],
+		_mm_storeu_si128((__m128i *)&dst_hdls[i],
 				_mm_unpackhi_epi64(hdls0, hdls1));
 	}
 	for (; i < count; i++, read++) {
-		uintptr_t *hdls = (void *)&ioat->hdls[read & mask];
+		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
 		src_hdls[i] = hdls[0];
 		dst_hdls[i] = hdls[1];
 	}
@@ -228,4 +235,8 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+#ifdef __cplusplus
+}
+#endif
+
 #endif /* _RTE_IOAT_RAWDEV_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 04/25] raw/ioat: include extra info in error messages
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (2 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 03/25] raw/ioat: enable use from C++ code Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
                     ` (21 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

In case of any failures, include the function name and the line number in
the error message, to make tracking down the failure easier.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_rawdev_test.c | 53 +++++++++++++++++++----------
 1 file changed, 35 insertions(+), 18 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index e5b50ae9f..77f96bba3 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -16,6 +16,23 @@ int ioat_rawdev_test(uint16_t dev_id); /* pre-define to keep compiler happy */
 static struct rte_mempool *pool;
 static unsigned short expected_ring_size[MAX_SUPPORTED_RAWDEVS];
 
+#define PRINT_ERR(...) print_err(__func__, __LINE__, __VA_ARGS__)
+
+static inline int
+__rte_format_printf(3, 4)
+print_err(const char *func, int lineno, const char *format, ...)
+{
+	va_list ap;
+	int ret;
+
+	ret = fprintf(stderr, "In %s:%d - ", func, lineno);
+	va_start(ap, format);
+	ret += vfprintf(stderr, format, ap);
+	va_end(ap);
+
+	return ret;
+}
+
 static int
 test_enqueue_copies(int dev_id)
 {
@@ -45,7 +62,7 @@ test_enqueue_copies(int dev_id)
 				(uintptr_t)src,
 				(uintptr_t)dst,
 				0 /* no fence */) != 1) {
-			printf("Error with rte_ioat_enqueue_copy\n");
+			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
 		rte_ioat_do_copies(dev_id);
@@ -53,18 +70,18 @@ test_enqueue_copies(int dev_id)
 
 		if (rte_ioat_completed_copies(dev_id, 1, (void *)&completed[0],
 				(void *)&completed[1]) != 1) {
-			printf("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_copies\n");
 			return -1;
 		}
 		if (completed[0] != src || completed[1] != dst) {
-			printf("Error with completions: got (%p, %p), not (%p,%p)\n",
+			PRINT_ERR("Error with completions: got (%p, %p), not (%p,%p)\n",
 					completed[0], completed[1], src, dst);
 			return -1;
 		}
 
 		for (i = 0; i < length; i++)
 			if (dst_data[i] != src_data[i]) {
-				printf("Data mismatch at char %u\n", i);
+				PRINT_ERR("Data mismatch at char %u\n", i);
 				return -1;
 			}
 		rte_pktmbuf_free(src);
@@ -97,7 +114,7 @@ test_enqueue_copies(int dev_id)
 					(uintptr_t)srcs[i],
 					(uintptr_t)dsts[i],
 					0 /* nofence */) != 1) {
-				printf("Error with rte_ioat_enqueue_copy for buffer %u\n",
+				PRINT_ERR("Error with rte_ioat_enqueue_copy for buffer %u\n",
 						i);
 				return -1;
 			}
@@ -107,18 +124,18 @@ test_enqueue_copies(int dev_id)
 
 		if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
 				(void *)completed_dst) != RTE_DIM(srcs)) {
-			printf("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_copies\n");
 			return -1;
 		}
 		for (i = 0; i < RTE_DIM(srcs); i++) {
 			char *src_data, *dst_data;
 
 			if (completed_src[i] != srcs[i]) {
-				printf("Error with source pointer %u\n", i);
+				PRINT_ERR("Error with source pointer %u\n", i);
 				return -1;
 			}
 			if (completed_dst[i] != dsts[i]) {
-				printf("Error with dest pointer %u\n", i);
+				PRINT_ERR("Error with dest pointer %u\n", i);
 				return -1;
 			}
 
@@ -126,7 +143,7 @@ test_enqueue_copies(int dev_id)
 			dst_data = rte_pktmbuf_mtod(dsts[i], char *);
 			for (j = 0; j < length; j++)
 				if (src_data[j] != dst_data[j]) {
-					printf("Error with copy of packet %u, byte %u\n",
+					PRINT_ERR("Error with copy of packet %u, byte %u\n",
 							i, j);
 					return -1;
 				}
@@ -159,26 +176,26 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	rte_rawdev_info_get(dev_id, &info, sizeof(p));
 	if (p.ring_size != expected_ring_size[dev_id]) {
-		printf("Error, initial ring size is not as expected (Actual: %d, Expected: %d)\n",
+		PRINT_ERR("Error, initial ring size is not as expected (Actual: %d, Expected: %d)\n",
 				(int)p.ring_size, expected_ring_size[dev_id]);
 		return -1;
 	}
 
 	p.ring_size = IOAT_TEST_RINGSIZE;
 	if (rte_rawdev_configure(dev_id, &info, sizeof(p)) != 0) {
-		printf("Error with rte_rawdev_configure()\n");
+		PRINT_ERR("Error with rte_rawdev_configure()\n");
 		return -1;
 	}
 	rte_rawdev_info_get(dev_id, &info, sizeof(p));
 	if (p.ring_size != IOAT_TEST_RINGSIZE) {
-		printf("Error, ring size is not %d (%d)\n",
+		PRINT_ERR("Error, ring size is not %d (%d)\n",
 				IOAT_TEST_RINGSIZE, (int)p.ring_size);
 		return -1;
 	}
 	expected_ring_size[dev_id] = p.ring_size;
 
 	if (rte_rawdev_start(dev_id) != 0) {
-		printf("Error with rte_rawdev_start()\n");
+		PRINT_ERR("Error with rte_rawdev_start()\n");
 		return -1;
 	}
 
@@ -189,7 +206,7 @@ ioat_rawdev_test(uint16_t dev_id)
 			2048, /* data room size */
 			info.socket_id);
 	if (pool == NULL) {
-		printf("Error with mempool creation\n");
+		PRINT_ERR("Error with mempool creation\n");
 		return -1;
 	}
 
@@ -198,14 +215,14 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	snames = malloc(sizeof(*snames) * nb_xstats);
 	if (snames == NULL) {
-		printf("Error allocating xstat names memory\n");
+		PRINT_ERR("Error allocating xstat names memory\n");
 		goto err;
 	}
 	rte_rawdev_xstats_names_get(dev_id, snames, nb_xstats);
 
 	ids = malloc(sizeof(*ids) * nb_xstats);
 	if (ids == NULL) {
-		printf("Error allocating xstat ids memory\n");
+		PRINT_ERR("Error allocating xstat ids memory\n");
 		goto err;
 	}
 	for (i = 0; i < nb_xstats; i++)
@@ -213,7 +230,7 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	stats = malloc(sizeof(*stats) * nb_xstats);
 	if (stats == NULL) {
-		printf("Error allocating xstat memory\n");
+		PRINT_ERR("Error allocating xstat memory\n");
 		goto err;
 	}
 
@@ -233,7 +250,7 @@ ioat_rawdev_test(uint16_t dev_id)
 
 	rte_rawdev_stop(dev_id);
 	if (rte_rawdev_xstats_reset(dev_id, NULL, 0) != 0) {
-		printf("Error resetting xstat values\n");
+		PRINT_ERR("Error resetting xstat values\n");
 		goto err;
 	}
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 05/25] raw/ioat: add a flag to control copying handle parameters
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (3 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 04/25] raw/ioat: include extra info in error messages Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 06/25] raw/ioat: split header for readability Bruce Richardson
                     ` (20 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev
  Cc: patrick.fu, thomas, Cheng Jiang, Bruce Richardson, Kevin Laatz,
	Radu Nicolau

From: Cheng Jiang <Cheng1.jiang@intel.com>

Add a flag which controls whether the rte_ioat_enqueue_copy and
rte_ioat_completed_copies functions should process handle parameters.
Skipping that processing can improve performance when the handle
parameters are not needed.
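
As a usage illustration, a minimal configuration sketch following the
pattern of the driver's selftest. The helper name configure_no_hdls is
hypothetical, dev_id is assumed to identify an ioat rawdev, and the ring
size of 512 is just an example value:

	#include <stdbool.h>
	#include <rte_rawdev.h>
	#include <rte_ioat_rawdev.h>

	/* sketch: configure an ioat rawdev with handle tracking disabled */
	static int
	configure_no_hdls(uint16_t dev_id)
	{
		struct rte_ioat_rawdev_config cfg = {
			.ring_size = 512,	/* power of two, 64 to 4096 */
			.hdls_disable = true,	/* do not store/return user handles */
		};
		struct rte_rawdev_info info = { .dev_private = &cfg };

		if (rte_rawdev_configure(dev_id, &info, sizeof(cfg)) != 0)
			return -1;
		return rte_rawdev_start(dev_id);
	}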

Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst            |  3 ++
 doc/guides/rel_notes/release_20_11.rst |  6 ++++
 drivers/raw/ioat/ioat_rawdev.c         |  2 ++
 drivers/raw/ioat/rte_ioat_rawdev.h     | 45 +++++++++++++++++++-------
 4 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index c46460ff4..af00d77fb 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -129,6 +129,9 @@ output, the ``dev_private`` structure element cannot be NULL, and must
 point to a valid ``rte_ioat_rawdev_config`` structure, containing the ring
 size to be used by the device. The ring size must be a power of two,
 between 64 and 4096.
+If it is not needed, the driver's tracking of user-provided completion
+handles may also be disabled by setting the ``hdls_disable`` flag in
+the configuration structure.
 
 The following code shows how the device is configured in
 ``test_ioat_rawdev.c``:
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index cdf20404c..1e73c26d4 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -116,6 +116,12 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Updated ioat rawdev driver**
+
+  The ioat rawdev driver has been updated and enhanced. Changes include:
+
+  * Added a per-device configuration flag to disable management of user-provided completion handles
+
 
 Removed Items
 -------------
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 0732b059f..ea9f51ffc 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -58,6 +58,7 @@ ioat_dev_configure(const struct rte_rawdev *dev, rte_rawdev_obj_t config,
 		return -EINVAL;
 
 	ioat->ring_size = params->ring_size;
+	ioat->hdls_disable = params->hdls_disable;
 	if (ioat->desc_ring != NULL) {
 		rte_memzone_free(ioat->desc_mz);
 		ioat->desc_ring = NULL;
@@ -122,6 +123,7 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 		return -EINVAL;
 
 	cfg->ring_size = ioat->ring_size;
+	cfg->hdls_disable = ioat->hdls_disable;
 	return 0;
 }
 
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 3d8419271..28ce95cc9 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -38,7 +38,8 @@ extern "C" {
  * an ioat rawdev instance.
  */
 struct rte_ioat_rawdev_config {
-	unsigned short ring_size;
+	unsigned short ring_size; /**< size of job submission descriptor ring */
+	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
 /**
@@ -56,6 +57,7 @@ struct rte_ioat_rawdev {
 
 	unsigned short ring_size;
 	struct rte_ioat_generic_hw_desc *desc_ring;
+	bool hdls_disable;
 	__m128i *hdls; /* completion handles for returning to user */
 
 
@@ -88,10 +90,14 @@ struct rte_ioat_rawdev {
  *   The length of the data to be copied
  * @param src_hdl
  *   An opaque handle for the source data, to be returned when this operation
- *   has been completed and the user polls for the completion details
+ *   has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdl
  *   An opaque handle for the destination data, to be returned when this
- *   operation has been completed and the user polls for the completion details
+ *   operation has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param fence
  *   A flag parameter indicating that hardware should not begin to perform any
  *   subsequently enqueued copy operations until after this operation has
@@ -126,8 +132,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
 	desc->src_addr = src;
 	desc->dest_addr = dst;
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
 
-	ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl, (int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
 	ioat->enqueued++;
@@ -174,19 +182,29 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /**
  * Returns details of copy operations that have been completed
  *
- * Returns to the caller the user-provided "handles" for the copy operations
- * which have been completed by the hardware, and not already returned by
- * a previous call to this API.
+ * If the hdls_disable option was not set when the device was configured,
+ * the function will return to the caller the user-provided "handles" for
+ * the copy operations which have been completed by the hardware, and not
+ * already returned by a previous call to this API.
+ * If the hdls_disable option for the device was set on configure, the
+ * max_copies, src_hdls and dst_hdls parameters will be ignored, and the
+ * function returns the number of newly-completed operations.
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  * @param max_copies
  *   The number of entries which can fit in the src_hdls and dst_hdls
- *   arrays, i.e. max number of completed operations to report
+ *   arrays, i.e. max number of completed operations to report.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies
+ *   Array to hold the source handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies
+ *   Array to hold the destination handle parameters of the completed copies.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
  * @return
  *   -1 on error, with rte_errno set appropriately.
  *   Otherwise number of completed operations i.e. number of entries written
@@ -212,6 +230,11 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		return -1;
 	}
 
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
 	if (count > max_copies)
 		count = max_copies;
 
@@ -229,7 +252,7 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 		src_hdls[i] = hdls[0];
 		dst_hdls[i] = hdls[1];
 	}
-
+end:
 	ioat->next_read = read;
 	ioat->completed += count;
 	return count;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 06/25] raw/ioat: split header for readability
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (4 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
                     ` (19 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Rather than having a single long, complicated header file for general use,
we can split things so that there is one header with all the publicly
needed information - data structs and function prototypes - while the rest
of the internal details are kept in a separate file. This makes the APIs
easier to read, understand and use.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev.h     | 147 +---------------------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 168 +++++++++++++++++++++++++
 3 files changed, 175 insertions(+), 141 deletions(-)
 create mode 100644 drivers/raw/ioat/rte_ioat_rawdev_fns.h

diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 0878418ae..f66e9b605 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,4 +8,5 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
+		'rte_ioat_rawdev_fns.h',
 		'rte_ioat_spec.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 28ce95cc9..7067b352f 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -18,12 +18,7 @@ extern "C" {
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
-#include <x86intrin.h>
-#include <rte_atomic.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+#include <rte_common.h>
 
 /** Name of the device driver */
 #define IOAT_PMD_RAWDEV_NAME rawdev_ioat
@@ -42,38 +37,6 @@ struct rte_ioat_rawdev_config {
 	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
-/**
- * @internal
- * Structure representing a device instance
- */
-struct rte_ioat_rawdev {
-	struct rte_rawdev *rawdev;
-	const struct rte_memzone *mz;
-	const struct rte_memzone *desc_mz;
-
-	volatile struct rte_ioat_registers *regs;
-	phys_addr_t status_addr;
-	phys_addr_t ring_addr;
-
-	unsigned short ring_size;
-	struct rte_ioat_generic_hw_desc *desc_ring;
-	bool hdls_disable;
-	__m128i *hdls; /* completion handles for returning to user */
-
-
-	unsigned short next_read;
-	unsigned short next_write;
-
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
-	/* to report completions, the device will write status back here */
-	volatile uint64_t status __rte_cache_aligned;
-};
-
 /**
  * Enqueue a copy operation onto the ioat device
  *
@@ -108,39 +71,7 @@ struct rte_ioat_rawdev {
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	unsigned short read = ioat->next_read;
-	unsigned short write = ioat->next_write;
-	unsigned short mask = ioat->ring_size - 1;
-	unsigned short space = mask + read - write;
-	struct rte_ioat_generic_hw_desc *desc;
-
-	if (space == 0) {
-		ioat->enqueue_failed++;
-		return 0;
-	}
-
-	ioat->next_write = write + 1;
-	write &= mask;
-
-	desc = &ioat->desc_ring[write];
-	desc->size = length;
-	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
-	desc->src_addr = src;
-	desc->dest_addr = dst;
-	if (!ioat->hdls_disable)
-		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
-					(int64_t)src_hdl);
-
-	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
-
-	ioat->enqueued++;
-	return 1;
-}
+		int fence);
 
 /**
  * Trigger hardware to begin performing enqueued copy operations
@@ -152,32 +83,7 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
-			.control.completion_update = 1;
-	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
-	ioat->started = ioat->enqueued;
-}
-
-/**
- * @internal
- * Returns the index of the last completed operation.
- */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
-{
-	uint64_t status = ioat->status;
-
-	/* lower 3 bits indicate "transfer status" : active, idle, halted.
-	 * We can ignore bit 0.
-	 */
-	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
-	return (status - ioat->ring_addr) >> 6;
-}
+rte_ioat_do_copies(int dev_id);
 
 /**
  * Returns details of copy operations that have been completed
@@ -212,51 +118,10 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
  */
 static inline int
 rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
-		uintptr_t *src_hdls, uintptr_t *dst_hdls)
-{
-	struct rte_ioat_rawdev *ioat =
-			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
-	unsigned short mask = (ioat->ring_size - 1);
-	unsigned short read = ioat->next_read;
-	unsigned short end_read, count;
-	int error;
-	int i = 0;
-
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
-	count = (end_read - (read & mask)) & mask;
-
-	if (error) {
-		rte_errno = EIO;
-		return -1;
-	}
-
-	if (ioat->hdls_disable) {
-		read += count;
-		goto end;
-	}
+		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
-	if (count > max_copies)
-		count = max_copies;
-
-	for (; i < count - 1; i += 2, read += 2) {
-		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
-		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
-
-		_mm_storeu_si128((__m128i *)&src_hdls[i],
-				_mm_unpacklo_epi64(hdls0, hdls1));
-		_mm_storeu_si128((__m128i *)&dst_hdls[i],
-				_mm_unpackhi_epi64(hdls0, hdls1));
-	}
-	for (; i < count; i++, read++) {
-		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
-		src_hdls[i] = hdls[0];
-		dst_hdls[i] = hdls[1];
-	}
-end:
-	ioat->next_read = read;
-	ioat->completed += count;
-	return count;
-}
+/* include the implementation details from a separate file */
+#include "rte_ioat_rawdev_fns.h"
 
 #ifdef __cplusplus
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
new file mode 100644
index 000000000..4b7bdb8e2
--- /dev/null
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -0,0 +1,168 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Intel Corporation
+ */
+#ifndef _RTE_IOAT_RAWDEV_FNS_H_
+#define _RTE_IOAT_RAWDEV_FNS_H_
+
+#include <x86intrin.h>
+#include <rte_rawdev.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device instance
+ */
+struct rte_ioat_rawdev {
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	const struct rte_memzone *desc_mz;
+
+	volatile struct rte_ioat_registers *regs;
+	phys_addr_t status_addr;
+	phys_addr_t ring_addr;
+
+	unsigned short ring_size;
+	bool hdls_disable;
+	struct rte_ioat_generic_hw_desc *desc_ring;
+	__m128i *hdls; /* completion handles for returning to user */
+
+
+	unsigned short next_read;
+	unsigned short next_write;
+
+	/* some statistics for tracking, if added/changed update xstats fns*/
+	uint64_t enqueue_failed __rte_cache_aligned;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+
+	/* to report completions, the device will write status back here */
+	volatile uint64_t status __rte_cache_aligned;
+};
+
+/*
+ * Enqueue a copy operation onto the ioat device
+ */
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
+		int fence)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short read = ioat->next_read;
+	unsigned short write = ioat->next_write;
+	unsigned short mask = ioat->ring_size - 1;
+	unsigned short space = mask + read - write;
+	struct rte_ioat_generic_hw_desc *desc;
+
+	if (space == 0) {
+		ioat->enqueue_failed++;
+		return 0;
+	}
+
+	ioat->next_write = write + 1;
+	write &= mask;
+
+	desc = &ioat->desc_ring[write];
+	desc->size = length;
+	/* set descriptor write-back every 16th descriptor */
+	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
+	desc->src_addr = src;
+	desc->dest_addr = dst;
+
+	if (!ioat->hdls_disable)
+		ioat->hdls[write] = _mm_set_epi64x((int64_t)dst_hdl,
+					(int64_t)src_hdl);
+	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
+
+	ioat->enqueued++;
+	return 1;
+}
+
+/*
+ * Trigger hardware to begin performing enqueued copy operations
+ */
+static inline void
+rte_ioat_do_copies(int dev_id)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
+			.control.completion_update = 1;
+	rte_compiler_barrier();
+	ioat->regs->dmacount = ioat->next_write;
+	ioat->started = ioat->enqueued;
+}
+
+/**
+ * @internal
+ * Returns the index of the last completed operation.
+ */
+static inline int
+rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+{
+	uint64_t status = ioat->status;
+
+	/* lower 3 bits indicate "transfer status" : active, idle, halted.
+	 * We can ignore bit 0.
+	 */
+	*error = status & (RTE_IOAT_CHANSTS_SUSPENDED | RTE_IOAT_CHANSTS_ARMED);
+	return (status - ioat->ring_addr) >> 6;
+}
+
+/*
+ * Returns details of copy operations that have been completed
+ */
+static inline int
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short mask = (ioat->ring_size - 1);
+	unsigned short read = ioat->next_read;
+	unsigned short end_read, count;
+	int error;
+	int i = 0;
+
+	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	count = (end_read - (read & mask)) & mask;
+
+	if (error) {
+		rte_errno = EIO;
+		return -1;
+	}
+
+	if (ioat->hdls_disable) {
+		read += count;
+		goto end;
+	}
+
+	if (count > max_copies)
+		count = max_copies;
+
+	for (; i < count - 1; i += 2, read += 2) {
+		__m128i hdls0 = _mm_load_si128(&ioat->hdls[read & mask]);
+		__m128i hdls1 = _mm_load_si128(&ioat->hdls[(read + 1) & mask]);
+
+		_mm_storeu_si128((__m128i *)&src_hdls[i],
+				_mm_unpacklo_epi64(hdls0, hdls1));
+		_mm_storeu_si128((__m128i *)&dst_hdls[i],
+				_mm_unpackhi_epi64(hdls0, hdls1));
+	}
+	for (; i < count; i++, read++) {
+		uintptr_t *hdls = (uintptr_t *)&ioat->hdls[read & mask];
+		src_hdls[i] = hdls[0];
+		dst_hdls[i] = hdls[1];
+	}
+
+end:
+	ioat->next_read = read;
+	ioat->completed += count;
+	return count;
+}
+
+#endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 07/25] raw/ioat: rename functions to be operation-agnostic
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (5 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 06/25] raw/ioat: split header for readability Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 08/25] raw/ioat: add separate API for fence call Bruce Richardson
                     ` (18 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Since the hardware supported by the ioat driver is capable of operations
other than just copies, we can rename the doorbell and completion-return
functions to not have "copies" in their names. These functions are not
copy-specific, and so would apply to other operations which may be added
later to the driver.

Also add a suitable warning, using the deprecation attribute, for any code
still using the old function names.
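
As an illustrative sketch only, not part of this patch (the helper name
below is hypothetical), migrating an application is a mechanical rename,
since the parameters and return values are unchanged:

    #include <rte_ioat_rawdev.h>

    /* flush outstanding ops and gather up to 64 completion handles */
    static int
    flush_and_gather(int dev_id, uintptr_t *src_hdls, uintptr_t *dst_hdls)
    {
        /* previously: rte_ioat_do_copies(dev_id); */
        rte_ioat_perform_ops(dev_id);

        /* previously: rte_ioat_completed_copies(dev_id, 64, ...); */
        return rte_ioat_completed_ops(dev_id, 64, src_hdls, dst_hdls);
    }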

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst            | 16 ++++++++--------
 doc/guides/rel_notes/release_20_11.rst |  9 +++++++++
 doc/guides/sample_app_ug/ioat.rst      |  8 ++++----
 drivers/raw/ioat/ioat_rawdev_test.c    | 12 ++++++------
 drivers/raw/ioat/rte_ioat_rawdev.h     | 17 ++++++++++-------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 20 ++++++++++++++++----
 examples/ioat/ioatfwd.c                |  4 ++--
 lib/librte_eal/include/rte_common.h    |  1 +
 8 files changed, 56 insertions(+), 31 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index af00d77fb..3db5f5d09 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -157,9 +157,9 @@ Performing Data Copies
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 To perform data copies using IOAT rawdev devices, the functions
-``rte_ioat_enqueue_copy()`` and ``rte_ioat_do_copies()`` should be used.
+``rte_ioat_enqueue_copy()`` and ``rte_ioat_perform_ops()`` should be used.
 Once copies have been completed, the completion will be reported back when
-the application calls ``rte_ioat_completed_copies()``.
+the application calls ``rte_ioat_completed_ops()``.
 
 The ``rte_ioat_enqueue_copy()`` function enqueues a single copy to the
 device ring for copying at a later point. The parameters to that function
@@ -172,11 +172,11 @@ pointers if packet data is being copied.
 
 While the ``rte_ioat_enqueue_copy()`` function enqueues a copy operation on
 the device ring, the copy will not actually be performed until after the
-application calls the ``rte_ioat_do_copies()`` function. This function
+application calls the ``rte_ioat_perform_ops()`` function. This function
 informs the device hardware of the elements enqueued on the ring, and the
 device will begin to process them. It is expected that, for efficiency
 reasons, a burst of operations will be enqueued to the device via multiple
-enqueue calls between calls to the ``rte_ioat_do_copies()`` function.
+enqueue calls between calls to the ``rte_ioat_perform_ops()`` function.
 
 The following code from ``test_ioat_rawdev.c`` demonstrates how to enqueue
 a burst of copies to the device and start the hardware processing of them:
@@ -210,10 +210,10 @@ a burst of copies to the device and start the hardware processing of them:
                         return -1;
                 }
         }
-        rte_ioat_do_copies(dev_id);
+        rte_ioat_perform_ops(dev_id);
 
 To retrieve information about completed copies, the API
-``rte_ioat_completed_copies()`` should be used. This API will return to the
+``rte_ioat_completed_ops()`` should be used. This API will return to the
 application a set of completion handles passed in when the relevant copies
 were enqueued.
 
@@ -223,9 +223,9 @@ is correct before freeing the data buffers using the returned handles:
 
 .. code-block:: C
 
-        if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+        if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
                         (void *)completed_dst) != RTE_DIM(srcs)) {
-                printf("Error with rte_ioat_completed_copies\n");
+                printf("Error with rte_ioat_completed_ops\n");
                 return -1;
         }
         for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 1e73c26d4..e7d038f31 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -121,6 +121,11 @@ New Features
   The ioat rawdev driver has been updated and enhanced. Changes include:
 
   * Added a per-device configuration flag to disable management of user-provided completion handles
+  * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
+    and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
+    to better reflect the APIs' purposes, and remove the implication that
+    they are limited to copy operations only.
+    [Note: The old API is still provided but marked as deprecated in the code]
 
 
 Removed Items
@@ -234,6 +239,10 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* raw/ioat: As noted above, the ``rte_ioat_do_copies()`` and
+  ``rte_ioat_completed_copies()`` functions have been renamed to
+  ``rte_ioat_perform_ops()`` and ``rte_ioat_completed_ops()`` respectively.
+
 
 ABI Changes
 -----------
diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst
index 3f7d5c34a..964160dff 100644
--- a/doc/guides/sample_app_ug/ioat.rst
+++ b/doc/guides/sample_app_ug/ioat.rst
@@ -394,7 +394,7 @@ packet using ``pktmbuf_sw_copy()`` function and enqueue them to an rte_ring:
                 nb_enq = ioat_enqueue_packets(pkts_burst,
                     nb_rx, rx_config->ioat_ids[i]);
                 if (nb_enq > 0)
-                    rte_ioat_do_copies(rx_config->ioat_ids[i]);
+                    rte_ioat_perform_ops(rx_config->ioat_ids[i]);
             } else {
                 /* Perform packet software copy, free source packets */
                 int ret;
@@ -433,7 +433,7 @@ The packets are received in burst mode using ``rte_eth_rx_burst()``
 function. When using hardware copy mode the packets are enqueued in
 copying device's buffer using ``ioat_enqueue_packets()`` which calls
 ``rte_ioat_enqueue_copy()``. When all received packets are in the
-buffer the copy operations are started by calling ``rte_ioat_do_copies()``.
+buffer the copy operations are started by calling ``rte_ioat_perform_ops()``.
 Function ``rte_ioat_enqueue_copy()`` operates on physical address of
 the packet. Structure ``rte_mbuf`` contains only physical address to
 start of the data buffer (``buf_iova``). Thus the address is adjusted
@@ -490,7 +490,7 @@ or indirect mbufs, then multiple copy operations must be used.
 
 
 All completed copies are processed by ``ioat_tx_port()`` function. When using
-hardware copy mode the function invokes ``rte_ioat_completed_copies()``
+hardware copy mode the function invokes ``rte_ioat_completed_ops()``
 on each assigned IOAT channel to gather copied packets. If software copy
 mode is used the function dequeues copied packets from the rte_ring. Then each
 packet MAC address is changed if it was enabled. After that copies are sent
@@ -510,7 +510,7 @@ in burst mode using `` rte_eth_tx_burst()``.
         for (i = 0; i < tx_config->nb_queues; i++) {
             if (copy_mode == COPY_MODE_IOAT_NUM) {
                 /* Deque the mbufs from IOAT device. */
-                nb_dq = rte_ioat_completed_copies(
+                nb_dq = rte_ioat_completed_ops(
                     tx_config->ioat_ids[i], MAX_PKT_BURST,
                     (void *)mbufs_src, (void *)mbufs_dst);
             } else {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 77f96bba3..439b46c03 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -65,12 +65,12 @@ test_enqueue_copies(int dev_id)
 			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(10);
 
-		if (rte_ioat_completed_copies(dev_id, 1, (void *)&completed[0],
+		if (rte_ioat_completed_ops(dev_id, 1, (void *)&completed[0],
 				(void *)&completed[1]) != 1) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		if (completed[0] != src || completed[1] != dst) {
@@ -119,12 +119,12 @@ test_enqueue_copies(int dev_id)
 				return -1;
 			}
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(100);
 
-		if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+		if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
 				(void *)completed_dst) != RTE_DIM(srcs)) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 7067b352f..ae6393951 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -69,24 +69,26 @@ struct rte_ioat_rawdev_config {
  *   Number of operations enqueued, either 0 or 1
  */
 static inline int
+__rte_experimental
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
 		int fence);
 
 /**
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  *
  * This API is used to write the "doorbell" to the hardware to trigger it
- * to begin the copy operations previously enqueued by rte_ioat_enqueue_copy()
+ * to begin the operations previously enqueued by rte_ioat_enqueue_copy()
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id);
+__rte_experimental
+rte_ioat_perform_ops(int dev_id);
 
 /**
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  *
  * If the hdls_disable option was not set when the device was configured,
  * the function will return to the caller the user-provided "handles" for
@@ -104,11 +106,11 @@ rte_ioat_do_copies(int dev_id);
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies.
+ *   Array to hold the source handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies.
+ *   Array to hold the destination handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @return
@@ -117,7 +119,8 @@ rte_ioat_do_copies(int dev_id);
  *   to the src_hdls and dst_hdls array parameters.
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+__rte_experimental
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
 /* include the implementation details from a separate file */
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 4b7bdb8e2..b155d79c4 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -83,10 +83,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 }
 
 /*
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
+rte_ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -114,10 +114,10 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 }
 
 /*
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -165,4 +165,16 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static inline void
+__rte_deprecated_msg("use rte_ioat_perform_ops() instead")
+rte_ioat_do_copies(int dev_id) { rte_ioat_perform_ops(dev_id); }
+
+static inline int
+__rte_deprecated_msg("use rte_ioat_completed_ops() instead")
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	return rte_ioat_completed_ops(dev_id, max_copies, src_hdls, dst_hdls);
+}
+
 #endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 288a75c7b..67f75737b 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -406,7 +406,7 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
 			nb_enq = ioat_enqueue_packets(pkts_burst,
 				nb_rx, rx_config->ioat_ids[i]);
 			if (nb_enq > 0)
-				rte_ioat_do_copies(rx_config->ioat_ids[i]);
+				rte_ioat_perform_ops(rx_config->ioat_ids[i]);
 		} else {
 			/* Perform packet software copy, free source packets */
 			int ret;
@@ -452,7 +452,7 @@ ioat_tx_port(struct rxtx_port_config *tx_config)
 	for (i = 0; i < tx_config->nb_queues; i++) {
 		if (copy_mode == COPY_MODE_IOAT_NUM) {
 			/* Deque the mbufs from IOAT device. */
-			nb_dq = rte_ioat_completed_copies(
+			nb_dq = rte_ioat_completed_ops(
 				tx_config->ioat_ids[i], MAX_PKT_BURST,
 				(void *)mbufs_src, (void *)mbufs_dst);
 		} else {
diff --git a/lib/librte_eal/include/rte_common.h b/lib/librte_eal/include/rte_common.h
index 8f487a563..2920255fc 100644
--- a/lib/librte_eal/include/rte_common.h
+++ b/lib/librte_eal/include/rte_common.h
@@ -85,6 +85,7 @@ typedef uint16_t unaligned_uint16_t;
 
 /******* Macro to mark functions and fields scheduled for removal *****/
 #define __rte_deprecated	__attribute__((__deprecated__))
+#define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
 
 /**
  * Mark a function or variable to a weak reference.
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 08/25] raw/ioat: add separate API for fence call
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (6 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 09/25] raw/ioat: make the HW register spec private Bruce Richardson
                     ` (17 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Rather than having the fence signalled via a flag on a descriptor - which
requires reading the docs to find out whether the flag needs to go on the
last descriptor before, or the first descriptor after the fence - we can
instead add a separate fence API call. This becomes unambiguous to use,
since the fence call explicitly comes between two other enqueue calls. It
also allows more freedom of implementation in the driver code.
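
As an illustrative sketch only, not part of this patch (the helper below is
hypothetical and the completion handles are omitted for brevity), the fence
becomes an explicit call placed between two enqueues, with the doorbell rung
afterwards as before:

    #include <rte_ioat_rawdev.h>

    /* enqueue copy 'b' ordered strictly after copy 'a', then ring doorbell */
    static int
    enqueue_ordered_pair(int dev_id, phys_addr_t a_src, phys_addr_t a_dst,
            phys_addr_t b_src, phys_addr_t b_dst, unsigned int len)
    {
        if (rte_ioat_enqueue_copy(dev_id, a_src, a_dst, len, 0, 0) != 1)
            return -1;
        rte_ioat_fence(dev_id); /* 'b' must not start until 'a' completes */
        if (rte_ioat_enqueue_copy(dev_id, b_src, b_dst, len, 0, 0) != 1)
            return -1;
        rte_ioat_perform_ops(dev_id);
        return 0;
    }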

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst            |  3 +--
 doc/guides/rel_notes/release_20_11.rst |  4 ++++
 drivers/raw/ioat/ioat_rawdev_test.c    |  6 ++----
 drivers/raw/ioat/rte_ioat_rawdev.h     | 27 ++++++++++++++++++++------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 22 ++++++++++++++++++---
 examples/ioat/ioatfwd.c                | 12 ++++--------
 6 files changed, 51 insertions(+), 23 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 3db5f5d09..71bca0b28 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -203,8 +203,7 @@ a burst of copies to the device and start the hardware processing of them:
                                 dsts[i]->buf_iova + dsts[i]->data_off,
                                 length,
                                 (uintptr_t)srcs[i],
-                                (uintptr_t)dsts[i],
-                                0 /* nofence */) != 1) {
+                                (uintptr_t)dsts[i]) != 1) {
                         printf("Error with rte_ioat_enqueue_copy for buffer %u\n",
                                         i);
                         return -1;
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index e7d038f31..25ede96d9 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -126,6 +126,10 @@ New Features
     to better reflect the APIs' purposes, and remove the implication that
     they are limited to copy operations only.
     [Note: The old API is still provided but marked as deprecated in the code]
+  * Added a new API ``rte_ioat_fence()`` to add a fence between operations.
+    This API replaces the ``fence`` flag parameter in the ``rte_ioat_enqueue_copy()`` function,
+    and is clearer as there is no ambiguity as to whether the flag should be
+    set on the last operation before the fence or the first operation after it.
 
 
 Removed Items
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 439b46c03..8b665cc9a 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -60,8 +60,7 @@ test_enqueue_copies(int dev_id)
 				dst->buf_iova + dst->data_off,
 				length,
 				(uintptr_t)src,
-				(uintptr_t)dst,
-				0 /* no fence */) != 1) {
+				(uintptr_t)dst) != 1) {
 			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
@@ -112,8 +111,7 @@ test_enqueue_copies(int dev_id)
 					dsts[i]->buf_iova + dsts[i]->data_off,
 					length,
 					(uintptr_t)srcs[i],
-					(uintptr_t)dsts[i],
-					0 /* nofence */) != 1) {
+					(uintptr_t)dsts[i]) != 1) {
 				PRINT_ERR("Error with rte_ioat_enqueue_copy for buffer %u\n",
 						i);
 				return -1;
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index ae6393951..21a929012 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -61,18 +61,33 @@ struct rte_ioat_rawdev_config {
  *   operation has been completed and the user polls for the completion details.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
- * @param fence
- *   A flag parameter indicating that hardware should not begin to perform any
- *   subsequently enqueued copy operations until after this operation has
- *   completed
  * @return
  *   Number of operations enqueued, either 0 or 1
  */
 static inline int
 __rte_experimental
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
-		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence);
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl);
+
+/**
+ * Add a fence to force ordering between operations
+ *
+ * This adds a fence to a sequence of operations to enforce ordering, such that
+ * all operations enqueued before the fence must be completed before operations
+ * after the fence.
+ * NOTE: Since this fence may be added as a flag to the last operation enqueued,
+ * this API may not function correctly when called immediately after an
+ * "rte_ioat_perform_ops" call i.e. before any new operations are enqueued.
+ *
+ * @param dev_id
+ *   The rawdev device id of the ioat instance
+ * @return
+ *   Number of fences enqueued, either 0 or 1
+ */
+static inline int
+__rte_experimental
+rte_ioat_fence(int dev_id);
+
 
 /**
  * Trigger hardware to begin performing enqueued operations
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index b155d79c4..466721a23 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -47,8 +47,7 @@ struct rte_ioat_rawdev {
  */
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
-		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
-		int fence)
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -69,7 +68,7 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc = &ioat->desc_ring[write];
 	desc->size = length;
 	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!!fence << 4) | (!(write & 0xF)) << 3);
+	desc->u.control_raw = (uint32_t)((!(write & 0xF)) << 3);
 	desc->src_addr = src;
 	desc->dest_addr = dst;
 
@@ -82,6 +81,23 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	return 1;
 }
 
+/* add fence to last written descriptor */
+static inline int
+rte_ioat_fence(int dev_id)
+{
+	struct rte_ioat_rawdev *ioat =
+			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
+	unsigned short write = ioat->next_write;
+	unsigned short mask = ioat->ring_size - 1;
+	struct rte_ioat_generic_hw_desc *desc;
+
+	write = (write - 1) & mask;
+	desc = &ioat->desc_ring[write];
+
+	desc->u.control.fence = 1;
+	return 0;
+}
+
 /*
  * Trigger hardware to begin performing enqueued operations
  */
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 67f75737b..e6d1d1236 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -361,15 +361,11 @@ ioat_enqueue_packets(struct rte_mbuf **pkts,
 	for (i = 0; i < nb_rx; i++) {
 		/* Perform data copy */
 		ret = rte_ioat_enqueue_copy(dev_id,
-			pkts[i]->buf_iova
-			- addr_offset,
-			pkts_copy[i]->buf_iova
-			- addr_offset,
-			rte_pktmbuf_data_len(pkts[i])
-			+ addr_offset,
+			pkts[i]->buf_iova - addr_offset,
+			pkts_copy[i]->buf_iova - addr_offset,
+			rte_pktmbuf_data_len(pkts[i]) + addr_offset,
 			(uintptr_t)pkts[i],
-			(uintptr_t)pkts_copy[i],
-			0 /* nofence */);
+			(uintptr_t)pkts_copy[i]);
 
 		if (ret != 1)
 			break;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 09/25] raw/ioat: make the HW register spec private
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (7 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 08/25] raw/ioat: add separate API for fence call Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
                     ` (16 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Only a few definitions from the hardware spec are actually used in the
driver runtime, so we can copy over those few and make the rest of the spec
a private header in the driver.
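
The enabler for this, as can be seen in the diff below, is that the inline
data path only needs a pointer to the doorbell register, not the full
register layout. A minimal sketch of that pattern, using hypothetical names
rather than the driver's own:

    #include <stdint.h>

    /* full register spec: visible only to driver-private code */
    struct hw_regs {
        uint16_t ctrl;
        uint16_t dmacount;
        /* ... remainder of the spec ... */
    };

    /* all the public fast-path header needs to carry */
    struct fastpath {
        volatile uint16_t *doorbell;
    };

    static void
    attach(struct fastpath *fp, struct hw_regs *regs)
    {
        fp->doorbell = &regs->dmacount; /* set up once at probe time */
    }

    static void
    ring_doorbell(struct fastpath *fp, uint16_t count)
    {
        *fp->doorbell = count; /* no register spec header needed here */
    }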

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c                |  3 ++
 .../raw/ioat/{rte_ioat_spec.h => ioat_spec.h} | 26 -----------
 drivers/raw/ioat/meson.build                  |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h        | 43 +++++++++++++++++--
 4 files changed, 44 insertions(+), 31 deletions(-)
 rename drivers/raw/ioat/{rte_ioat_spec.h => ioat_spec.h} (91%)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index ea9f51ffc..aa59b731f 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -4,10 +4,12 @@
 
 #include <rte_cycles.h>
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 #include <rte_string_fns.h>
 #include <rte_rawdev_pmd.h>
 
 #include "rte_ioat_rawdev.h"
+#include "ioat_spec.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -268,6 +270,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
+	ioat->doorbell = &ioat->regs->dmacount;
 	ioat->ring_size = 0;
 	ioat->desc_ring = NULL;
 	ioat->status_addr = ioat->mz->iova +
diff --git a/drivers/raw/ioat/rte_ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
similarity index 91%
rename from drivers/raw/ioat/rte_ioat_spec.h
rename to drivers/raw/ioat/ioat_spec.h
index c6e7929b2..9645e16d4 100644
--- a/drivers/raw/ioat/rte_ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -86,32 +86,6 @@ struct rte_ioat_registers {
 
 #define RTE_IOAT_CHANCMP_ALIGN			8	/* CHANCMP address must be 64-bit aligned */
 
-struct rte_ioat_generic_hw_desc {
-	uint32_t size;
-	union {
-		uint32_t control_raw;
-		struct {
-			uint32_t int_enable: 1;
-			uint32_t src_snoop_disable: 1;
-			uint32_t dest_snoop_disable: 1;
-			uint32_t completion_update: 1;
-			uint32_t fence: 1;
-			uint32_t reserved2: 1;
-			uint32_t src_page_break: 1;
-			uint32_t dest_page_break: 1;
-			uint32_t bundle: 1;
-			uint32_t dest_dca: 1;
-			uint32_t hint: 1;
-			uint32_t reserved: 13;
-			uint32_t op: 8;
-		} control;
-	} u;
-	uint64_t src_addr;
-	uint64_t dest_addr;
-	uint64_t next;
-	uint64_t op_specific[4];
-};
-
 struct rte_ioat_dma_hw_desc {
 	uint32_t size;
 	union {
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index f66e9b605..06636f8a9 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -8,5 +8,4 @@ sources = files('ioat_rawdev.c',
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-		'rte_ioat_rawdev_fns.h',
-		'rte_ioat_spec.h')
+		'rte_ioat_rawdev_fns.h')
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 466721a23..c6e0b9a58 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -8,7 +8,36 @@
 #include <rte_rawdev.h>
 #include <rte_memzone.h>
 #include <rte_prefetch.h>
-#include "rte_ioat_spec.h"
+
+/**
+ * @internal
+ * Structure representing a device descriptor
+ */
+struct rte_ioat_generic_hw_desc {
+	uint32_t size;
+	union {
+		uint32_t control_raw;
+		struct {
+			uint32_t int_enable: 1;
+			uint32_t src_snoop_disable: 1;
+			uint32_t dest_snoop_disable: 1;
+			uint32_t completion_update: 1;
+			uint32_t fence: 1;
+			uint32_t reserved2: 1;
+			uint32_t src_page_break: 1;
+			uint32_t dest_page_break: 1;
+			uint32_t bundle: 1;
+			uint32_t dest_dca: 1;
+			uint32_t hint: 1;
+			uint32_t reserved: 13;
+			uint32_t op: 8;
+		} control;
+	} u;
+	uint64_t src_addr;
+	uint64_t dest_addr;
+	uint64_t next;
+	uint64_t op_specific[4];
+};
 
 /**
  * @internal
@@ -19,7 +48,7 @@ struct rte_ioat_rawdev {
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile struct rte_ioat_registers *regs;
+	volatile uint16_t *doorbell;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -40,8 +69,16 @@ struct rte_ioat_rawdev {
 
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
+
+	/* pointer to the register bar */
+	volatile struct rte_ioat_registers *regs;
 };
 
+#define RTE_IOAT_CHANSTS_IDLE			0x1
+#define RTE_IOAT_CHANSTS_SUSPENDED		0x2
+#define RTE_IOAT_CHANSTS_HALTED			0x3
+#define RTE_IOAT_CHANSTS_ARMED			0x4
+
 /*
  * Enqueue a copy operation onto the ioat device
  */
@@ -109,7 +146,7 @@ rte_ioat_perform_ops(int dev_id)
 	ioat->desc_ring[(ioat->next_write - 1) & (ioat->ring_size - 1)].u
 			.control.completion_update = 1;
 	rte_compiler_barrier();
-	ioat->regs->dmacount = ioat->next_write;
+	*ioat->doorbell = ioat->next_write;
 	ioat->started = ioat->enqueued;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 10/25] usertools/dpdk-devbind.py: add support for DSA HW
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (8 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 09/25] raw/ioat: make the HW register spec private Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
                     ` (15 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Kevin Laatz, Bruce Richardson, Radu Nicolau

From: Kevin Laatz <kevin.laatz@intel.com>

Intel Data Streaming Accelerator (Intel DSA) is a high-performance data
copy and transformation accelerator which will be integrated in future
Intel processors [1].

Add DSA device support to dpdk-devbind.py script.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst | 2 ++
 usertools/dpdk-devbind.py              | 6 +++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 25ede96d9..e48e6ea75 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -120,6 +120,8 @@ New Features
 
   The ioat rawdev driver has been updated and enhanced. Changes include:
 
+  * Added support for Intel\ |reg| Data Streaming Accelerator hardware.
+    For more information, see https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
   * Added a per-device configuration flag to disable management of user-provided completion handles
   * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
     and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index d149bac13..1d1113a08 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -48,6 +48,8 @@
               'SVendor': None, 'SDevice': None}
 intel_ioat_icx = {'Class': '08', 'Vendor': '8086', 'Device': '0b00',
               'SVendor': None, 'SDevice': None}
+intel_idxd_spr = {'Class': '08', 'Vendor': '8086', 'Device': '0b25',
+              'SVendor': None, 'SDevice': None}
 intel_ntb_skx = {'Class': '06', 'Vendor': '8086', 'Device': '201c',
               'SVendor': None, 'SDevice': None}
 intel_ntb_icx = {'Class': '06', 'Vendor': '8086', 'Device': '347e',
@@ -59,7 +61,9 @@
 eventdev_devices = [cavium_sso, cavium_tim, octeontx2_sso]
 mempool_devices = [cavium_fpa, octeontx2_npa]
 compress_devices = [cavium_zip]
-misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_ntb_skx, intel_ntb_icx, octeontx2_dma]
+misc_devices = [intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_idxd_spr,
+                intel_ntb_skx, intel_ntb_icx,
+                octeontx2_dma]
 
 # global dict ethernet devices present. Dictionary indexed by PCI address.
 # Each device within this is itself a dictionary of device properties
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (9 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
                     ` (14 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add in the basic probe/remove skeleton code for DSA devices which are bound
directly to a vfio or uio driver. The kernel module supporting these devices
uses the "idxd" name, so that name is used as the function and file prefix to
avoid conflict with the existing "ioat"-prefixed functions.

Since we are adding new files to the driver and there will be common
definitions shared between the various files, we create a new internal
header file ioat_private.h to hold common macros and function prototypes.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst     | 69 ++++++++++-----------------------
 drivers/raw/ioat/idxd_pci.c     | 56 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 27 +++++++++++++
 drivers/raw/ioat/ioat_rawdev.c  |  9 +----
 drivers/raw/ioat/meson.build    |  6 ++-
 5 files changed, 108 insertions(+), 59 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_pci.c
 create mode 100644 drivers/raw/ioat/ioat_private.h

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 71bca0b28..b898f98d5 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -3,10 +3,12 @@
 
 .. include:: <isonum.txt>
 
-IOAT Rawdev Driver for Intel\ |reg| QuickData Technology
-======================================================================
+IOAT Rawdev Driver
+===================
 
 The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg|
+Data Streaming Accelerator `(Intel DSA)
+<https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator>`_ and for Intel\ |reg|
 QuickData Technology, part of Intel\ |reg| I/O Acceleration Technology
 `(Intel I/OAT)
 <https://www.intel.com/content/www/us/en/wireless-network/accel-technology.html>`_.
@@ -17,61 +19,30 @@ be done by software, freeing up CPU cycles for other tasks.
 Hardware Requirements
 ----------------------
 
-On Linux, the presence of an Intel\ |reg| QuickData Technology hardware can
-be detected by checking the output of the ``lspci`` command, where the
-hardware will be often listed as "Crystal Beach DMA" or "CBDMA". For
-example, on a system with Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @2.20GHz,
-lspci shows:
-
-.. code-block:: console
-
-  # lspci | grep DMA
-  00:04.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 0 (rev 01)
-  00:04.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 1 (rev 01)
-  00:04.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 2 (rev 01)
-  00:04.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 3 (rev 01)
-  00:04.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 4 (rev 01)
-  00:04.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 5 (rev 01)
-  00:04.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 6 (rev 01)
-  00:04.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 7 (rev 01)
-
-On a system with Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz, lspci
-shows:
-
-.. code-block:: console
-
-  # lspci | grep DMA
-  00:04.0 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.1 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.2 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.3 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.4 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.5 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.6 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-  00:04.7 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-
+The ``dpdk-devbind.py`` script, included with DPDK,
+can be used to show the presence of supported hardware.
+Running ``dpdk-devbind.py --status-dev misc`` will show all the miscellaneous,
+or rawdev-based devices on the system.
+For Intel\ |reg| QuickData Technology devices, the hardware will be often listed as "Crystal Beach DMA",
+or "CBDMA".
+For Intel\ |reg| DSA devices, they are currently (at time of writing) appearing as devices with type "0b25",
+due to the absence of pci-id database entries for them at this point.
 
 Compilation
 ------------
 
-For builds done with ``make``, the driver compilation is enabled by the
-``CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV`` build configuration option. This is
-enabled by default in builds for x86 platforms, and disabled in other
-configurations.
-
-For builds using ``meson`` and ``ninja``, the driver will be built when the
-target platform is x86-based.
+For builds using ``meson`` and ``ninja``, the driver will be built when the target platform is x86-based.
+No additional compilation steps are necessary.
 
 Device Setup
 -------------
 
-The Intel\ |reg| QuickData Technology HW devices will need to be bound to a
-user-space IO driver for use. The script ``dpdk-devbind.py`` script
-included with DPDK can be used to view the state of the devices and to bind
-them to a suitable DPDK-supported kernel driver. When querying the status
-of the devices, they will appear under the category of "Misc (rawdev)
-devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
-used to see the state of those devices alone.
+The HW devices to be used will need to be bound to a user-space IO driver for use.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported kernel driver, such as ``vfio-pci``.
+For example::
+
+	$ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1
 
 Device Probing and Initialization
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
new file mode 100644
index 000000000..1a30e9c31
--- /dev/null
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_pci.h>
+
+#include "ioat_private.h"
+
+#define IDXD_VENDOR_ID		0x8086
+#define IDXD_DEVICE_ID_SPR	0x0B25
+
+#define IDXD_PMD_RAWDEV_NAME_PCI rawdev_idxd_pci
+
+const struct rte_pci_id pci_id_idxd_map[] = {
+	{ RTE_PCI_DEVICE(IDXD_VENDOR_ID, IDXD_DEVICE_ID_SPR) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+{
+	int ret = 0;
+	char name[PCI_PRI_STR_SIZE];
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
+	dev->device.driver = &drv->driver;
+
+	return ret;
+}
+
+static int
+idxd_rawdev_remove_pci(struct rte_pci_device *dev)
+{
+	char name[PCI_PRI_STR_SIZE];
+	int ret = 0;
+
+	rte_pci_device_name(&dev->addr, name, sizeof(name));
+
+	IOAT_PMD_INFO("Closing %s on NUMA node %d",
+			name, dev->device.numa_node);
+
+	return ret;
+}
+
+struct rte_pci_driver idxd_pmd_drv_pci = {
+	.id_table = pci_id_idxd_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = idxd_rawdev_probe_pci,
+	.remove = idxd_rawdev_remove_pci,
+};
+
+RTE_PMD_REGISTER_PCI(IDXD_PMD_RAWDEV_NAME_PCI, idxd_pmd_drv_pci);
+RTE_PMD_REGISTER_PCI_TABLE(IDXD_PMD_RAWDEV_NAME_PCI, pci_id_idxd_map);
+RTE_PMD_REGISTER_KMOD_DEP(IDXD_PMD_RAWDEV_NAME_PCI,
+			  "* igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
new file mode 100644
index 000000000..d87d4d055
--- /dev/null
+++ b/drivers/raw/ioat/ioat_private.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _IOAT_PRIVATE_H_
+#define _IOAT_PRIVATE_H_
+
+/**
+ * @file ioat_private.h
+ *
+ * Private data structures for the idxd/DSA part of ioat device driver
+ *
+ * @warning
+ * @b EXPERIMENTAL: these structures and APIs may change without prior notice
+ */
+
+extern int ioat_pmd_logtype;
+
+#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
+		ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
+
+#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
+#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
+#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
+#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
+
+#endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index aa59b731f..1fe32278d 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -10,6 +10,7 @@
 
 #include "rte_ioat_rawdev.h"
 #include "ioat_spec.h"
+#include "ioat_private.h"
 
 static struct rte_pci_driver ioat_pmd_drv;
 
@@ -29,14 +30,6 @@ static struct rte_pci_driver ioat_pmd_drv;
 
 RTE_LOG_REGISTER(ioat_pmd_logtype, rawdev.ioat, INFO);
 
-#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
-	ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
-
-#define IOAT_PMD_DEBUG(fmt, args...)  IOAT_PMD_LOG(DEBUG, fmt, ## args)
-#define IOAT_PMD_INFO(fmt, args...)   IOAT_PMD_LOG(INFO, fmt, ## args)
-#define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
-#define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
-
 #define DESC_SZ sizeof(struct rte_ioat_generic_hw_desc)
 #define COMPLETION_SZ sizeof(__m128i)
 
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 06636f8a9..3529635e9 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -3,8 +3,10 @@
 
 build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
-sources = files('ioat_rawdev.c',
-		'ioat_rawdev_test.c')
+sources = files(
+	'idxd_pci.c',
+	'ioat_rawdev.c',
+	'ioat_rawdev_test.c')
 deps += ['rawdev', 'bus_pci', 'mbuf']
 
 install_headers('rte_ioat_rawdev.h',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 12/25] raw/ioat: add vdev probe for DSA/idxd devices
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (10 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 13/25] raw/ioat: include example configuration script Bruce Richardson
                     ` (13 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Kevin Laatz, Bruce Richardson, Radu Nicolau

From: Kevin Laatz <kevin.laatz@intel.com>

The Intel DSA devices can be exposed to userspace via the idxd kernel driver,
so they can be used without having to bind them to vfio/uio. Therefore we add
support for using those kernel-configured devices as vdevs, taking the
individual HW work queue to be used by the vdev as a parameter.
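
For illustration only, not part of this patch: the work queue is normally
given on the EAL command line, e.g. "--vdev=rawdev_idxd,wq=0.0" for work
queue 0 on DSA instance 0 (see the documentation change below). A minimal
sketch, assuming that same vdev name and argument format, of creating the
vdev programmatically instead:

    #include <rte_bus_vdev.h>

    /* equivalent to passing --vdev=rawdev_idxd,wq=0.0 on the command line */
    static int
    attach_dsa_wq0(void)
    {
        return rte_vdev_init("rawdev_idxd", "wq=0.0");
    }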

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst  |  68 +++++++++++++++++--
 drivers/raw/ioat/idxd_vdev.c | 123 +++++++++++++++++++++++++++++++++++
 drivers/raw/ioat/meson.build |   6 +-
 3 files changed, 192 insertions(+), 5 deletions(-)
 create mode 100644 drivers/raw/ioat/idxd_vdev.c

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index b898f98d5..5b8d27980 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -37,9 +37,62 @@ No additional compilation steps are necessary.
 Device Setup
 -------------
 
+Depending on support provided by the PMD, HW devices can either use the kernel configured driver
+or be bound to a user-space IO driver for use.
+For example, Intel\ |reg| DSA devices can use the IDXD kernel driver or DPDK-supported drivers,
+such as ``vfio-pci``.
+
+Intel\ |reg| DSA devices using idxd kernel driver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use a Intel\ |reg| DSA device bound to the IDXD kernel driver, the device must first be configured.
+The `accel-config <https://github.com/intel/idxd-config>`_ utility library can be used for configuration.
+
+.. note::
+        The device configuration can also be done by directly interacting with the sysfs nodes.
+
+There are some mandatory configuration steps before being able to use a device with an application.
+The internal engines, which do the copies or other operations,
+and the work-queues, which are used by applications to assign work to the device,
+need to be assigned to groups, and the various other configuration options,
+such as priority or queue depth, need to be set for each queue.
+
+To assign an engine to a group::
+
+        $ accel-config config-engine dsa0/engine0.0 --group-id=0
+        $ accel-config config-engine dsa0/engine0.1 --group-id=1
+
+To assign work queues to groups for passing descriptors to the engines a similar accel-config command can be used.
+However, the work queues also need to be configured depending on the use-case.
+Some configuration options include:
+
+* mode (Dedicated/Shared): Indicates whether a WQ may accept jobs from multiple queues simultaneously.
+* priority: WQ priority between 1 and 15. Larger value means higher priority.
+* wq-size: the size of the WQ. Sum of all WQ sizes must be less than the total-size defined by the device.
+* type: WQ type (kernel/mdev/user). Determines how the device is presented.
+* name: identifier given to the WQ.
+
+Example configuration for a work queue::
+
+        $ accel-config config-wq dsa0/wq0.0 --group-id=0 \
+           --mode=dedicated --priority=10 --wq-size=8 \
+           --type=user --name=app1
+
+Once the devices have been configured, they need to be enabled::
+
+        $ accel-config enable-device dsa0
+        $ accel-config enable-wq dsa0/wq0.0
+
+Check the device configuration::
+
+        $ accel-config list
+
+Devices using VFIO/UIO drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
 The HW devices to be used will need to be bound to a user-space IO driver for use.
 The ``dpdk-devbind.py`` script can be used to view the state of the devices
-and to bind them to a suitable DPDK-supported kernel driver, such as ``vfio-pci``.
+and to bind them to a suitable DPDK-supported driver, such as ``vfio-pci``.
 For example::
 
 	$ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1
@@ -47,9 +100,16 @@ For example::
 Device Probing and Initialization
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Once bound to a suitable kernel device driver, the HW devices will be found
-as part of the PCI scan done at application initialization time. No vdev
-parameters need to be passed to create or initialize the device.
+For devices bound to a suitable DPDK-supported VFIO/UIO driver, the HW devices will
+be found as part of the device scan done at application initialization time without
+the need to pass parameters to the application.
+
+If the device is bound to the IDXD kernel driver (and previously configured with sysfs),
+then a specific work queue needs to be passed to the application via a vdev parameter.
+This vdev parameter takes the driver name and work queue name as parameters.
+For example, to use work queue 0 on Intel\ |reg| DSA instance 0::
+
+        $ dpdk-test --no-pci --vdev=rawdev_idxd,wq=0.0
 
 Once probed successfully, the device will appear as a ``rawdev``, that is a
 "raw device type" inside DPDK, and can be accessed using APIs from the
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
new file mode 100644
index 000000000..0509fc084
--- /dev/null
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_vdev.h>
+#include <rte_kvargs.h>
+#include <rte_string_fns.h>
+#include <rte_rawdev_pmd.h>
+
+#include "ioat_private.h"
+
+/** Name of the device driver */
+#define IDXD_PMD_RAWDEV_NAME rawdev_idxd
+/* takes a work queue(WQ) as parameter */
+#define IDXD_ARG_WQ		"wq"
+
+static const char * const valid_args[] = {
+	IDXD_ARG_WQ,
+	NULL
+};
+
+struct idxd_vdev_args {
+	uint8_t device_id;
+	uint8_t wq_id;
+};
+
+static int
+idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
+			  void *extra_args)
+{
+	struct idxd_vdev_args *args = (struct idxd_vdev_args *)extra_args;
+	int dev, wq, bytes = -1;
+	int read = sscanf(value, "%d.%d%n", &dev, &wq, &bytes);
+
+	if (read != 2 || bytes != (int)strlen(value)) {
+		IOAT_PMD_ERR("Error parsing work-queue id. Must be in <dev_id>.<queue_id> format");
+		return -EINVAL;
+	}
+
+	if (dev >= UINT8_MAX || wq >= UINT8_MAX) {
+		IOAT_PMD_ERR("Device or work queue id out of range");
+		return -EINVAL;
+	}
+
+	args->device_id = dev;
+	args->wq_id = wq;
+
+	return 0;
+}
+
+static int
+idxd_vdev_parse_params(struct rte_kvargs *kvlist, struct idxd_vdev_args *args)
+{
+	if (rte_kvargs_count(kvlist, IDXD_ARG_WQ) == 1) {
+		if (rte_kvargs_process(kvlist, IDXD_ARG_WQ,
+				&idxd_rawdev_parse_wq, args) < 0) {
+			IOAT_PMD_ERR("Error parsing %s", IDXD_ARG_WQ);
+			goto free;
+		}
+	} else {
+		IOAT_PMD_ERR("%s is a mandatory arg", IDXD_ARG_WQ);
+		return -EINVAL;
+	}
+
+	return 0;
+
+free:
+	if (kvlist)
+		rte_kvargs_free(kvlist);
+	return -EINVAL;
+}
+
+static int
+idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
+{
+	struct rte_kvargs *kvlist;
+	struct idxd_vdev_args vdev_args;
+	const char *name;
+	int ret = 0;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Initializing pmd_idxd for %s", name);
+
+	kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev), valid_args);
+	if (kvlist == NULL) {
+		IOAT_PMD_ERR("Invalid kvargs key");
+		return -EINVAL;
+	}
+
+	ret = idxd_vdev_parse_params(kvlist, &vdev_args);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to parse kvargs");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
+{
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	IOAT_PMD_INFO("Remove DSA vdev %p", name);
+
+	return 0;
+}
+
+struct rte_vdev_driver idxd_rawdev_drv_vdev = {
+	.probe = idxd_rawdev_probe_vdev,
+	.remove = idxd_rawdev_remove_vdev,
+};
+
+RTE_PMD_REGISTER_VDEV(IDXD_PMD_RAWDEV_NAME, idxd_rawdev_drv_vdev);
+RTE_PMD_REGISTER_PARAM_STRING(IDXD_PMD_RAWDEV_NAME,
+			      "wq=<string>");
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index 3529635e9..b343b7367 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -5,9 +5,13 @@ build = dpdk_conf.has('RTE_ARCH_X86')
 reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
+	'idxd_vdev.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
-deps += ['rawdev', 'bus_pci', 'mbuf']
+deps += ['bus_pci',
+	'bus_vdev',
+	'mbuf',
+	'rawdev']
 
 install_headers('rte_ioat_rawdev.h',
 		'rte_ioat_rawdev_fns.h')
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 13/25] raw/ioat: include example configuration script
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (11 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
                     ` (12 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Devices managed by the idxd kernel driver must be configured for DPDK use
before they can be used by the ioat driver. This example script serves both
as a quick way to get the driver set up with a simple configuration, and as
a basis for users to modify when creating their own configuration scripts.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst       |  2 +
 drivers/raw/ioat/dpdk_idxd_cfg.py | 79 +++++++++++++++++++++++++++++++
 2 files changed, 81 insertions(+)
 create mode 100755 drivers/raw/ioat/dpdk_idxd_cfg.py

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 5b8d27980..7c2a2d457 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -50,6 +50,8 @@ The `accel-config <https://github.com/intel/idxd-config>`_ utility library can b
 
 .. note::
         The device configuration can also be done by directly interacting with the sysfs nodes.
+        An example of how this may be done can be seen in the script ``dpdk_idxd_cfg.py``
+        included in the driver source directory.
 
 There are some mandatory configuration steps before being able to use a device with an application.
 The internal engines, which do the copies or other operations,
diff --git a/drivers/raw/ioat/dpdk_idxd_cfg.py b/drivers/raw/ioat/dpdk_idxd_cfg.py
new file mode 100755
index 000000000..bce4bb5bd
--- /dev/null
+++ b/drivers/raw/ioat/dpdk_idxd_cfg.py
@@ -0,0 +1,79 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+"""
+Configure an entire Intel DSA instance, using idxd kernel driver, for DPDK use
+"""
+
+import sys
+import argparse
+import os
+import os.path
+
+
+class SysfsDir:
+    "Used to read/write paths in a sysfs directory"
+    def __init__(self, path):
+        self.path = path
+
+    def read_int(self, filename):
+        "Return a value from sysfs file"
+        with open(os.path.join(self.path, filename)) as f:
+            return int(f.readline())
+
+    def write_values(self, values):
+        "write dictionary, where key is filename and value is value to write"
+        for filename, contents in values.items():
+            with open(os.path.join(self.path, filename), "w") as f:
+                f.write(str(contents))
+
+
+def configure_dsa(dsa_id, queues):
+    "Configure the DSA instance with appropriate number of queues"
+    dsa_dir = SysfsDir(f"/sys/bus/dsa/devices/dsa{dsa_id}")
+    drv_dir = SysfsDir("/sys/bus/dsa/drivers/dsa")
+
+    max_groups = dsa_dir.read_int("max_groups")
+    max_engines = dsa_dir.read_int("max_engines")
+    max_queues = dsa_dir.read_int("max_work_queues")
+    max_tokens = dsa_dir.read_int("max_tokens")
+
+    # we want one engine per group
+    nb_groups = min(max_engines, max_groups)
+    for grp in range(nb_groups):
+        dsa_dir.write_values({f"engine{dsa_id}.{grp}/group_id": grp})
+
+    nb_queues = min(queues, max_queues)
+    if queues > nb_queues:
+        print(f"Setting number of queues to max supported value: {max_queues}")
+
+    # configure each queue
+    for q in range(nb_queues):
+        wq_dir = SysfsDir(os.path.join(dsa_dir.path, f"wq{dsa_id}.{q}"))
+        wq_dir.write_values({"group_id": q % nb_groups,
+                             "type": "user",
+                             "mode": "dedicated",
+                             "name": f"dpdk_wq{dsa_id}.{q}",
+                             "priority": 1,
+                             "size": int(max_tokens / nb_queues)})
+
+    # enable device and then queues
+    drv_dir.write_values({"bind": f"dsa{dsa_id}"})
+    for q in range(nb_queues):
+        drv_dir.write_values({"bind": f"wq{dsa_id}.{q}"})
+
+
+def main(args):
+    "Main function, does arg parsing and calls config function"
+    arg_p = argparse.ArgumentParser(
+        description="Configure whole DSA device instance for DPDK use")
+    arg_p.add_argument('dsa_id', type=int, help="DSA instance number")
+    arg_p.add_argument('-q', metavar='queues', type=int, default=255,
+                       help="Number of queues to set up")
+    parsed_args = arg_p.parse_args(args[1:])
+    configure_dsa(parsed_args.dsa_id, parsed_args.q)
+
+
+if __name__ == "__main__":
+    main(sys.argv)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 14/25] raw/ioat: create rawdev instances on idxd PCI probe
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (12 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 13/25] raw/ioat: include example configuration script Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
                     ` (11 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

When a matching device is found via PCI probe, create a rawdev instance for
each work queue on the hardware. Use an empty self-test function for these
devices so that the overall rawdev_autotest does not report failures.
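
For context, each work queue shows up as its own rawdev, named after the
PCI address with a queue suffix to match the snprintf() in the probe code
below. A minimal lookup sketch from the application side (the
rte_rawdev_get_dev_id() failure value is an assumption about the generic
rawdev API, not something this patch defines):

    #include <stdio.h>
    #include <rte_rawdev.h>

    /* Sketch: build the per-queue rawdev name used by the PCI probe below
     * ("<pci-address>-q<N>") and look it up via the generic rawdev API.
     */
    static int
    find_dsa_queue(const char *pci_addr, int qid)
    {
            char name[64];

            snprintf(name, sizeof(name), "%s-q%d", pci_addr, qid);
            return rte_rawdev_get_dev_id(name); /* assumed negative if not found */
    }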

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            | 237 ++++++++++++++++++++++++-
 drivers/raw/ioat/ioat_common.c         |  68 +++++++
 drivers/raw/ioat/ioat_private.h        |  33 ++++
 drivers/raw/ioat/ioat_rawdev_test.c    |   7 +
 drivers/raw/ioat/ioat_spec.h           |  64 +++++++
 drivers/raw/ioat/meson.build           |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  35 +++-
 7 files changed, 442 insertions(+), 3 deletions(-)
 create mode 100644 drivers/raw/ioat/ioat_common.c

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 1a30e9c31..c3fec56d5 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -3,8 +3,10 @@
  */
 
 #include <rte_bus_pci.h>
+#include <rte_memzone.h>
 
 #include "ioat_private.h"
+#include "ioat_spec.h"
 
 #define IDXD_VENDOR_ID		0x8086
 #define IDXD_DEVICE_ID_SPR	0x0B25
@@ -16,17 +18,246 @@ const struct rte_pci_id pci_id_idxd_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+static inline int
+idxd_pci_dev_command(struct idxd_rawdev *idxd, enum rte_idxd_cmds command)
+{
+	uint8_t err_code;
+	uint16_t qid = idxd->qid;
+	int i = 0;
+
+	if (command >= idxd_disable_wq && command <= idxd_reset_wq)
+		qid = (1 << qid);
+	rte_spinlock_lock(&idxd->u.pci->lk);
+	idxd->u.pci->regs->cmd = (command << IDXD_CMD_SHIFT) | qid;
+
+	do {
+		rte_pause();
+		err_code = idxd->u.pci->regs->cmdstatus;
+		if (++i >= 1000) {
+			IOAT_PMD_ERR("Timeout waiting for command response from HW");
+			rte_spinlock_unlock(&idxd->u.pci->lk);
+			return err_code;
+		}
+	} while (idxd->u.pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK);
+	rte_spinlock_unlock(&idxd->u.pci->lk);
+
+	return err_code & CMDSTATUS_ERR_MASK;
+}
+
+static int
+idxd_is_wq_enabled(struct idxd_rawdev *idxd)
+{
+	uint32_t state = idxd->u.pci->wq_regs[idxd->qid].wqcfg[WQ_STATE_IDX];
+	return ((state >> WQ_STATE_SHIFT) & WQ_STATE_MASK) == 0x1;
+}
+
+static const struct rte_rawdev_ops idxd_pci_ops = {
+		.dev_close = idxd_rawdev_close,
+		.dev_selftest = idxd_rawdev_test,
+};
+
+/* each portal uses 4 x 4k pages */
+#define IDXD_PORTAL_SIZE (4096 * 4)
+
+static int
+init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd)
+{
+	struct idxd_pci_common *pci;
+	uint8_t nb_groups, nb_engines, nb_wqs;
+	uint16_t grp_offset, wq_offset; /* how far into bar0 the regs are */
+	uint16_t wq_size, total_wq_size;
+	uint8_t lg2_max_batch, lg2_max_copy_size;
+	unsigned int i, err_code;
+
+	pci = malloc(sizeof(*pci));
+	if (pci == NULL) {
+		IOAT_PMD_ERR("%s: Can't allocate memory", __func__);
+		goto err;
+	}
+	rte_spinlock_init(&pci->lk);
+
+	/* assign the bar registers, and then configure device */
+	pci->regs = dev->mem_resource[0].addr;
+	grp_offset = (uint16_t)pci->regs->offsets[0];
+	pci->grp_regs = RTE_PTR_ADD(pci->regs, grp_offset * 0x100);
+	wq_offset = (uint16_t)(pci->regs->offsets[0] >> 16);
+	pci->wq_regs = RTE_PTR_ADD(pci->regs, wq_offset * 0x100);
+	pci->portals = dev->mem_resource[2].addr;
+
+	/* sanity check device status */
+	if (pci->regs->gensts & GENSTS_DEV_STATE_MASK) {
+		/* need function-level-reset (FLR) or is enabled */
+		IOAT_PMD_ERR("Device status is not disabled, cannot init");
+		goto err;
+	}
+	if (pci->regs->cmdstatus & CMDSTATUS_ACTIVE_MASK) {
+		/* command in progress */
+		IOAT_PMD_ERR("Device has a command in progress, cannot init");
+		goto err;
+	}
+
+	/* read basic info about the hardware for use when configuring */
+	nb_groups = (uint8_t)pci->regs->grpcap;
+	nb_engines = (uint8_t)pci->regs->engcap;
+	nb_wqs = (uint8_t)(pci->regs->wqcap >> 16);
+	total_wq_size = (uint16_t)pci->regs->wqcap;
+	lg2_max_copy_size = (uint8_t)(pci->regs->gencap >> 16) & 0x1F;
+	lg2_max_batch = (uint8_t)(pci->regs->gencap >> 21) & 0x0F;
+
+	IOAT_PMD_DEBUG("nb_groups = %u, nb_engines = %u, nb_wqs = %u",
+			nb_groups, nb_engines, nb_wqs);
+
+	/* zero out any old config */
+	for (i = 0; i < nb_groups; i++) {
+		pci->grp_regs[i].grpengcfg = 0;
+		pci->grp_regs[i].grpwqcfg[0] = 0;
+	}
+	for (i = 0; i < nb_wqs; i++)
+		pci->wq_regs[i].wqcfg[0] = 0;
+
+	/* put each engine into a separate group to avoid reordering */
+	if (nb_groups > nb_engines)
+		nb_groups = nb_engines;
+	if (nb_groups < nb_engines)
+		nb_engines = nb_groups;
+
+	/* assign engines to groups, round-robin style */
+	for (i = 0; i < nb_engines; i++) {
+		IOAT_PMD_DEBUG("Assigning engine %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpengcfg |= (1ULL << i);
+	}
+
+	/* now do the same for queues and give work slots to each queue */
+	wq_size = total_wq_size / nb_wqs;
+	IOAT_PMD_DEBUG("Work queue size = %u, max batch = 2^%u, max copy = 2^%u",
+			wq_size, lg2_max_batch, lg2_max_copy_size);
+	for (i = 0; i < nb_wqs; i++) {
+		/* add engine "i" to a group */
+		IOAT_PMD_DEBUG("Assigning work queue %u to group %u",
+				i, i % nb_groups);
+		pci->grp_regs[i % nb_groups].grpwqcfg[0] |= (1ULL << i);
+		/* now configure it, in terms of size, max batch, mode */
+		pci->wq_regs[i].wqcfg[WQ_SIZE_IDX] = wq_size;
+		pci->wq_regs[i].wqcfg[WQ_MODE_IDX] = (1 << WQ_PRIORITY_SHIFT) |
+				WQ_MODE_DEDICATED;
+		pci->wq_regs[i].wqcfg[WQ_SIZES_IDX] = lg2_max_copy_size |
+				(lg2_max_batch << WQ_BATCH_SZ_SHIFT);
+	}
+
+	/* dump the group configuration to output */
+	for (i = 0; i < nb_groups; i++) {
+		IOAT_PMD_DEBUG("## Group %d", i);
+		IOAT_PMD_DEBUG("    GRPWQCFG: %"PRIx64, pci->grp_regs[i].grpwqcfg[0]);
+		IOAT_PMD_DEBUG("    GRPENGCFG: %"PRIx64, pci->grp_regs[i].grpengcfg);
+		IOAT_PMD_DEBUG("    GRPFLAGS: %"PRIx32, pci->grp_regs[i].grpflags);
+	}
+
+	idxd->u.pci = pci;
+	idxd->max_batches = wq_size;
+
+	/* enable the device itself */
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error enabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device enabled OK");
+
+	return nb_wqs;
+
+err:
+	free(pci);
+	return -1;
+}
+
 static int
 idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 {
-	int ret = 0;
+	struct idxd_rawdev idxd = {{0}}; /* Double {} to avoid error on BSD12 */
+	uint8_t nb_wqs;
+	int qid, ret = 0;
 	char name[PCI_PRI_STR_SIZE];
 
 	rte_pci_device_name(&dev->addr, name, sizeof(name));
 	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
 	dev->device.driver = &drv->driver;
 
-	return ret;
+	ret = init_pci_device(dev, &idxd);
+	if (ret < 0) {
+		IOAT_PMD_ERR("Error initializing PCI hardware");
+		return ret;
+	}
+	nb_wqs = (uint8_t)ret;
+
+	/* set up one device for each queue */
+	for (qid = 0; qid < nb_wqs; qid++) {
+		char qname[32];
+
+		/* add the queue number to each device name */
+		snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
+		idxd.qid = qid;
+		idxd.public.portal = RTE_PTR_ADD(idxd.u.pci->portals,
+				qid * IDXD_PORTAL_SIZE);
+		if (idxd_is_wq_enabled(&idxd))
+			IOAT_PMD_ERR("Error, WQ %u seems enabled", qid);
+		ret = idxd_rawdev_create(qname, &dev->device,
+				&idxd, &idxd_pci_ops);
+		if (ret != 0) {
+			IOAT_PMD_ERR("Failed to create rawdev %s", name);
+			if (qid == 0) /* if no devices using this, free pci */
+				free(idxd.u.pci);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+idxd_rawdev_destroy(const char *name)
+{
+	int ret;
+	uint8_t err_code;
+	struct rte_rawdev *rdev;
+	struct idxd_rawdev *idxd;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid device name");
+		return -EINVAL;
+	}
+
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* disable the device */
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_dev);
+	if (err_code) {
+		IOAT_PMD_ERR("Error disabling device: code %#x", err_code);
+		return err_code;
+	}
+	IOAT_PMD_DEBUG("IDXD Device disabled OK");
+
+	/* free device memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+		rte_memzone_free(idxd->mz);
+	}
+
+	/* rte_rawdev_close is called by pmd_release */
+	ret = rte_rawdev_pmd_release(rdev);
+	if (ret)
+		IOAT_PMD_DEBUG("Device cleanup failed");
+
+	return 0;
 }
 
 static int
@@ -40,6 +271,8 @@ idxd_rawdev_remove_pci(struct rte_pci_device *dev)
 	IOAT_PMD_INFO("Closing %s on NUMA node %d",
 			name, dev->device.numa_node);
 
+	ret = idxd_rawdev_destroy(name);
+
 	return ret;
 }
 
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
new file mode 100644
index 000000000..c3aa015ed
--- /dev/null
+++ b/drivers/raw/ioat/ioat_common.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_rawdev_pmd.h>
+#include <rte_memzone.h>
+#include <rte_common.h>
+
+#include "ioat_private.h"
+
+int
+idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
+{
+	return 0;
+}
+
+int
+idxd_rawdev_create(const char *name, struct rte_device *dev,
+		   const struct idxd_rawdev *base_idxd,
+		   const struct rte_rawdev_ops *ops)
+{
+	struct idxd_rawdev *idxd;
+	struct rte_rawdev *rawdev = NULL;
+	const struct rte_memzone *mz = NULL;
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	int ret = 0;
+
+	if (!name) {
+		IOAT_PMD_ERR("Invalid name of the device!");
+		ret = -EINVAL;
+		goto cleanup;
+	}
+
+	/* Allocate device structure */
+	rawdev = rte_rawdev_pmd_allocate(name, sizeof(struct idxd_rawdev),
+					 dev->numa_node);
+	if (rawdev == NULL) {
+		IOAT_PMD_ERR("Unable to allocate raw device");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+
+	snprintf(mz_name, sizeof(mz_name), "rawdev%u_private", rawdev->dev_id);
+	mz = rte_memzone_reserve(mz_name, sizeof(struct idxd_rawdev),
+			dev->numa_node, RTE_MEMZONE_IOVA_CONTIG);
+	if (mz == NULL) {
+		IOAT_PMD_ERR("Unable to reserve memzone for private data\n");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+	rawdev->dev_private = mz->addr;
+	rawdev->dev_ops = ops;
+	rawdev->device = dev;
+	rawdev->driver_name = IOAT_PMD_RAWDEV_NAME_STR;
+
+	idxd = rawdev->dev_private;
+	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->rawdev = rawdev;
+	idxd->mz = mz;
+
+	return 0;
+
+cleanup:
+	if (rawdev)
+		rte_rawdev_pmd_release(rawdev);
+
+	return ret;
+}
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index d87d4d055..53f00a9f3 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -14,6 +14,10 @@
  * @b EXPERIMENTAL: these structures and APIs may change without prior notice
  */
 
+#include <rte_spinlock.h>
+#include <rte_rawdev_pmd.h>
+#include "rte_ioat_rawdev.h"
+
 extern int ioat_pmd_logtype;
 
 #define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
@@ -24,4 +28,33 @@ extern int ioat_pmd_logtype;
 #define IOAT_PMD_ERR(fmt, args...)    IOAT_PMD_LOG(ERR, fmt, ## args)
 #define IOAT_PMD_WARN(fmt, args...)   IOAT_PMD_LOG(WARNING, fmt, ## args)
 
+struct idxd_pci_common {
+	rte_spinlock_t lk;
+	volatile struct rte_idxd_bar0 *regs;
+	volatile struct rte_idxd_wqcfg *wq_regs;
+	volatile struct rte_idxd_grpcfg *grp_regs;
+	volatile void *portals;
+};
+
+struct idxd_rawdev {
+	struct rte_idxd_rawdev public; /* the public members, must be first */
+
+	struct rte_rawdev *rawdev;
+	const struct rte_memzone *mz;
+	uint8_t qid;
+	uint16_t max_batches;
+
+	union {
+		struct idxd_pci_common *pci;
+	} u;
+};
+
+extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
+		       const struct idxd_rawdev *idxd,
+		       const struct rte_rawdev_ops *ops);
+
+extern int idxd_rawdev_close(struct rte_rawdev *dev);
+
+extern int idxd_rawdev_test(uint16_t dev_id);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 8b665cc9a..87a65b7ae 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -7,6 +7,7 @@
 #include <rte_mbuf.h>
 #include "rte_rawdev.h"
 #include "rte_ioat_rawdev.h"
+#include "ioat_private.h"
 
 #define MAX_SUPPORTED_RAWDEVS 64
 #define TEST_SKIPPED 77
@@ -267,3 +268,9 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
+
+int
+idxd_rawdev_test(uint16_t dev_id __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/raw/ioat/ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
index 9645e16d4..1aa768b9a 100644
--- a/drivers/raw/ioat/ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -268,6 +268,70 @@ union rte_ioat_hw_desc {
 	struct rte_ioat_pq_update_hw_desc pq_update;
 };
 
+/*** Definitions for Intel(R) Data Streaming Accelerator Follow ***/
+
+#define IDXD_CMD_SHIFT 20
+enum rte_idxd_cmds {
+	idxd_enable_dev = 1,
+	idxd_disable_dev,
+	idxd_drain_all,
+	idxd_abort_all,
+	idxd_reset_device,
+	idxd_enable_wq,
+	idxd_disable_wq,
+	idxd_drain_wq,
+	idxd_abort_wq,
+	idxd_reset_wq,
+};
+
+/* General bar0 registers */
+struct rte_idxd_bar0 {
+	uint32_t __rte_cache_aligned version;    /* offset 0x00 */
+	uint64_t __rte_aligned(0x10) gencap;     /* offset 0x10 */
+	uint64_t __rte_aligned(0x10) wqcap;      /* offset 0x20 */
+	uint64_t __rte_aligned(0x10) grpcap;     /* offset 0x30 */
+	uint64_t __rte_aligned(0x08) engcap;     /* offset 0x38 */
+	uint64_t __rte_aligned(0x10) opcap;      /* offset 0x40 */
+	uint64_t __rte_aligned(0x20) offsets[2]; /* offset 0x60 */
+	uint32_t __rte_aligned(0x20) gencfg;     /* offset 0x80 */
+	uint32_t __rte_aligned(0x08) genctrl;    /* offset 0x88 */
+	uint32_t __rte_aligned(0x10) gensts;     /* offset 0x90 */
+	uint32_t __rte_aligned(0x08) intcause;   /* offset 0x98 */
+	uint32_t __rte_aligned(0x10) cmd;        /* offset 0xA0 */
+	uint32_t __rte_aligned(0x08) cmdstatus;  /* offset 0xA8 */
+	uint64_t __rte_aligned(0x20) swerror[4]; /* offset 0xC0 */
+};
+
+struct rte_idxd_wqcfg {
+	uint32_t wqcfg[8] __rte_aligned(32); /* 32-byte register */
+};
+
+#define WQ_SIZE_IDX      0 /* size is in first 32-bit value */
+#define WQ_THRESHOLD_IDX 1 /* WQ threshold second 32-bits */
+#define WQ_MODE_IDX      2 /* WQ mode and other flags */
+#define WQ_SIZES_IDX     3 /* WQ transfer and batch sizes */
+#define WQ_OCC_INT_IDX   4 /* WQ occupancy interrupt handle */
+#define WQ_OCC_LIMIT_IDX 5 /* WQ occupancy limit */
+#define WQ_STATE_IDX     6 /* WQ state and occupancy state */
+
+#define WQ_MODE_SHARED    0
+#define WQ_MODE_DEDICATED 1
+#define WQ_PRIORITY_SHIFT 4
+#define WQ_BATCH_SZ_SHIFT 5
+#define WQ_STATE_SHIFT 30
+#define WQ_STATE_MASK 0x3
+
+struct rte_idxd_grpcfg {
+	uint64_t grpwqcfg[4]  __rte_cache_aligned; /* 64-byte register set */
+	uint64_t grpengcfg;  /* offset 32 */
+	uint32_t grpflags;   /* offset 40 */
+};
+
+#define GENSTS_DEV_STATE_MASK 0x03
+#define CMDSTATUS_ACTIVE_SHIFT 31
+#define CMDSTATUS_ACTIVE_MASK (1 << 31)
+#define CMDSTATUS_ERR_MASK 0xFF
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/raw/ioat/meson.build b/drivers/raw/ioat/meson.build
index b343b7367..5eff76a1a 100644
--- a/drivers/raw/ioat/meson.build
+++ b/drivers/raw/ioat/meson.build
@@ -6,6 +6,7 @@ reason = 'only supported on x86'
 sources = files(
 	'idxd_pci.c',
 	'idxd_vdev.c',
+	'ioat_common.c',
 	'ioat_rawdev.c',
 	'ioat_rawdev_test.c')
 deps += ['bus_pci',
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index c6e0b9a58..fa2eb5334 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -41,9 +41,20 @@ struct rte_ioat_generic_hw_desc {
 
 /**
  * @internal
- * Structure representing a device instance
+ * Identify the data path to use.
+ * Must be first field of rte_ioat_rawdev and rte_idxd_rawdev structs
+ */
+enum rte_ioat_dev_type {
+	RTE_IOAT_DEV,
+	RTE_IDXD_DEV,
+};
+
+/**
+ * @internal
+ * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	enum rte_ioat_dev_type type;
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
@@ -79,6 +90,28 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED			0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/**
+ * @internal
+ * Structure representing an IDXD device instance
+ */
+struct rte_idxd_rawdev {
+	enum rte_ioat_dev_type type;
+	void *portal; /* address to write the batch descriptor */
+
+	/* counters to track the batches and the individual op handles */
+	uint16_t batch_ring_sz;  /* size of batch ring */
+	uint16_t hdl_ring_sz;    /* size of the user hdl ring */
+
+	uint16_t next_batch;     /* where we write descriptor ops */
+	uint16_t next_completed; /* batch where we read completions */
+	uint16_t next_ret_hdl;   /* the next user hdl to return */
+	uint16_t last_completed_hdl; /* the last user hdl that has completed */
+	uint16_t next_free_hdl;  /* where the handle for next op will go */
+
+	struct rte_idxd_user_hdl *hdl_ring;
+	struct rte_idxd_desc_batch *batch_ring;
+};
+
 /*
  * Enqueue a copy operation onto the ioat device
  */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 15/25] raw/ioat: create rawdev instances for idxd vdevs
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (13 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
                     ` (10 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Kevin Laatz, Bruce Richardson, Radu Nicolau

From: Kevin Laatz <kevin.laatz@intel.com>

For each vdev (DSA work queue) instance, create a rawdev instance.
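
As a usage note (an illustrative sketch, not part of the patch): besides the
--vdev EAL argument shown in the documentation earlier in this series, a
work queue can also be attached at runtime through the vdev bus API, using
the same wq=<dev_id>.<queue_id> argument parsed below.

    #include <rte_bus_vdev.h>

    /* Attach DSA instance 0, work queue 0, as a rawdev at runtime;
     * equivalent to passing --vdev=rawdev_idxd,wq=0.0 on the command line.
     * Returns 0 on success, negative if e.g. /dev/dsa/wq0.0 is absent.
     */
    static int
    attach_dsa_wq(void)
    {
            return rte_vdev_init("rawdev_idxd", "wq=0.0");
    }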

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_vdev.c    | 106 +++++++++++++++++++++++++++++++-
 drivers/raw/ioat/ioat_private.h |   4 ++
 2 files changed, 109 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 0509fc084..e61c26c1b 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -2,6 +2,12 @@
  * Copyright(c) 2020 Intel Corporation
  */
 
+#include <fcntl.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sys/mman.h>
+
+#include <rte_memzone.h>
 #include <rte_bus_vdev.h>
 #include <rte_kvargs.h>
 #include <rte_string_fns.h>
@@ -24,6 +30,36 @@ struct idxd_vdev_args {
 	uint8_t wq_id;
 };
 
+static const struct rte_rawdev_ops idxd_vdev_ops = {
+		.dev_close = idxd_rawdev_close,
+		.dev_selftest = idxd_rawdev_test,
+};
+
+static void *
+idxd_vdev_mmap_wq(struct idxd_vdev_args *args)
+{
+	void *addr;
+	char path[PATH_MAX];
+	int fd;
+
+	snprintf(path, sizeof(path), "/dev/dsa/wq%u.%u",
+			args->device_id, args->wq_id);
+	fd = open(path, O_RDWR);
+	if (fd < 0) {
+		IOAT_PMD_ERR("Failed to open device path");
+		return NULL;
+	}
+
+	addr = mmap(NULL, 0x1000, PROT_WRITE, MAP_SHARED, fd, 0);
+	close(fd);
+	if (addr == MAP_FAILED) {
+		IOAT_PMD_ERR("Failed to mmap device");
+		return NULL;
+	}
+
+	return addr;
+}
+
 static int
 idxd_rawdev_parse_wq(const char *key __rte_unused, const char *value,
 			  void *extra_args)
@@ -70,10 +106,32 @@ idxd_vdev_parse_params(struct rte_kvargs *kvlist, struct idxd_vdev_args *args)
 	return -EINVAL;
 }
 
+static int
+idxd_vdev_get_max_batches(struct idxd_vdev_args *args)
+{
+	char sysfs_path[PATH_MAX];
+	FILE *f;
+	int ret;
+
+	snprintf(sysfs_path, sizeof(sysfs_path),
+			"/sys/bus/dsa/devices/wq%u.%u/size",
+			args->device_id, args->wq_id);
+	f = fopen(sysfs_path, "r");
+	if (f == NULL)
+		return -1;
+
+	if (fscanf(f, "%d", &ret) != 1)
+		ret = -1;
+
+	fclose(f);
+	return ret;
+}
+
 static int
 idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 {
 	struct rte_kvargs *kvlist;
+	struct idxd_rawdev idxd = {{0}}; /* double {} to avoid error on BSD12 */
 	struct idxd_vdev_args vdev_args;
 	const char *name;
 	int ret = 0;
@@ -96,13 +154,32 @@ idxd_rawdev_probe_vdev(struct rte_vdev_device *vdev)
 		return -EINVAL;
 	}
 
+	idxd.qid = vdev_args.wq_id;
+	idxd.u.vdev.dsa_id = vdev_args.device_id;
+	idxd.max_batches = idxd_vdev_get_max_batches(&vdev_args);
+
+	idxd.public.portal = idxd_vdev_mmap_wq(&vdev_args);
+	if (idxd.public.portal == NULL) {
+		IOAT_PMD_ERR("WQ mmap failed");
+		return -ENOENT;
+	}
+
+	ret = idxd_rawdev_create(name, &vdev->device, &idxd, &idxd_vdev_ops);
+	if (ret) {
+		IOAT_PMD_ERR("Failed to create rawdev %s", name);
+		return ret;
+	}
+
 	return 0;
 }
 
 static int
 idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 {
+	struct idxd_rawdev *idxd;
 	const char *name;
+	struct rte_rawdev *rdev;
+	int ret = 0;
 
 	name = rte_vdev_device_name(vdev);
 	if (name == NULL)
@@ -110,7 +187,34 @@ idxd_rawdev_remove_vdev(struct rte_vdev_device *vdev)
 
 	IOAT_PMD_INFO("Remove DSA vdev %p", name);
 
-	return 0;
+	rdev = rte_rawdev_pmd_get_named_dev(name);
+	if (!rdev) {
+		IOAT_PMD_ERR("Invalid device name (%s)", name);
+		return -EINVAL;
+	}
+
+	idxd = rdev->dev_private;
+
+	/* free context and memory */
+	if (rdev->dev_private != NULL) {
+		IOAT_PMD_DEBUG("Freeing device driver memory");
+		rdev->dev_private = NULL;
+
+		if (munmap(idxd->public.portal, 0x1000) < 0) {
+			IOAT_PMD_ERR("Error unmapping portal");
+			ret = -errno;
+		}
+
+		rte_free(idxd->public.batch_ring);
+		rte_free(idxd->public.hdl_ring);
+
+		rte_memzone_free(idxd->mz);
+	}
+
+	if (rte_rawdev_pmd_release(rdev))
+		IOAT_PMD_ERR("Device cleanup failed");
+
+	return ret;
 }
 
 struct rte_vdev_driver idxd_rawdev_drv_vdev = {
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 53f00a9f3..6f7bdb499 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -45,6 +45,10 @@ struct idxd_rawdev {
 	uint16_t max_batches;
 
 	union {
+		struct {
+			unsigned int dsa_id;
+		} vdev;
+
 		struct idxd_pci_common *pci;
 	} u;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 16/25] raw/ioat: add datapath data structures for idxd devices
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (14 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 17/25] raw/ioat: add configure function " Bruce Richardson
                     ` (9 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add the relevant data structures for the DSA device data path. Also include
a device dump function to output the status of each device.
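
A brief usage sketch of the dump op (the same call the updated selftest
below makes through the generic rawdev API):

    #include <stdio.h>
    #include <rte_rawdev.h>

    /* Print the batch-ring and handle-ring state of an idxd rawdev. */
    static void
    dump_idxd_state(uint16_t dev_id)
    {
            rte_rawdev_dump(dev_id, stdout);
    }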

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 34 +++++++++++
 drivers/raw/ioat/ioat_private.h        |  2 +
 drivers/raw/ioat/ioat_rawdev_test.c    |  3 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 80 ++++++++++++++++++++++++++
 6 files changed, 120 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index c3fec56d5..9bee92766 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -54,6 +54,7 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index e61c26c1b..ba78eee90 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -33,6 +33,7 @@ struct idxd_vdev_args {
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
+		.dump = idxd_dev_dump,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index c3aa015ed..672241351 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -14,6 +14,36 @@ idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
 	return 0;
 }
 
+int
+idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	int i;
+
+	fprintf(f, "Raw Device #%d\n", dev->dev_id);
+	fprintf(f, "Driver: %s\n\n", dev->driver_name);
+
+	fprintf(f, "Portal: %p\n", rte_idxd->portal);
+	fprintf(f, "Batch Ring size: %u\n", rte_idxd->batch_ring_sz);
+	fprintf(f, "Comp Handle Ring size: %u\n\n", rte_idxd->hdl_ring_sz);
+
+	fprintf(f, "Next batch: %u\n", rte_idxd->next_batch);
+	fprintf(f, "Next batch to be completed: %u\n", rte_idxd->next_completed);
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		fprintf(f, "Batch %u @%p: submitted=%u, op_count=%u, hdl_end=%u\n",
+				i, b, b->submitted, b->op_count, b->hdl_end);
+	}
+
+	fprintf(f, "\n");
+	fprintf(f, "Next free hdl: %u\n", rte_idxd->next_free_hdl);
+	fprintf(f, "Last completed hdl: %u\n", rte_idxd->last_completed_hdl);
+	fprintf(f, "Next returned hdl: %u\n", rte_idxd->next_ret_hdl);
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
@@ -25,6 +55,10 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	int ret = 0;
 
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_hw_desc) != 64);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_idxd_hw_desc, size) != 32);
+	RTE_BUILD_BUG_ON(sizeof(struct rte_idxd_completion) != 32);
+
 	if (!name) {
 		IOAT_PMD_ERR("Invalid name of the device!");
 		ret = -EINVAL;
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 6f7bdb499..f521c85a1 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -61,4 +61,6 @@ extern int idxd_rawdev_close(struct rte_rawdev *dev);
 
 extern int idxd_rawdev_test(uint16_t dev_id);
 
+extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
+
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 87a65b7ae..ceeac92ef 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -270,7 +270,8 @@ ioat_rawdev_test(uint16_t dev_id)
 }
 
 int
-idxd_rawdev_test(uint16_t dev_id __rte_unused)
+idxd_rawdev_test(uint16_t dev_id)
 {
+	rte_rawdev_dump(dev_id, stdout);
 	return 0;
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index fa2eb5334..178c432dd 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -90,6 +90,86 @@ struct rte_ioat_rawdev {
 #define RTE_IOAT_CHANSTS_HALTED			0x3
 #define RTE_IOAT_CHANSTS_ARMED			0x4
 
+/*
+ * Defines used in the data path for interacting with hardware.
+ */
+#define IDXD_CMD_OP_SHIFT 24
+enum rte_idxd_ops {
+	idxd_op_nop = 0,
+	idxd_op_batch,
+	idxd_op_drain,
+	idxd_op_memmove,
+	idxd_op_fill
+};
+
+#define IDXD_FLAG_FENCE                 (1 << 0)
+#define IDXD_FLAG_COMPLETION_ADDR_VALID (1 << 2)
+#define IDXD_FLAG_REQUEST_COMPLETION    (1 << 3)
+#define IDXD_FLAG_CACHE_CONTROL         (1 << 8)
+
+/**
+ * Hardware descriptor used by DSA hardware, for both bursts and
+ * for individual operations.
+ */
+struct rte_idxd_hw_desc {
+	uint32_t pasid;
+	uint32_t op_flags;
+	rte_iova_t completion;
+
+	RTE_STD_C11
+	union {
+		rte_iova_t src;      /* source address for copy ops etc. */
+		rte_iova_t desc_addr; /* descriptor pointer for batch */
+	};
+	rte_iova_t dst;
+
+	uint32_t size;    /* length of data for op, or batch size */
+
+	/* 28 bytes of padding here */
+} __rte_aligned(64);
+
+/**
+ * Completion record structure written back by DSA
+ */
+struct rte_idxd_completion {
+	uint8_t status;
+	uint8_t result;
+	/* 16-bits pad here */
+	uint32_t completed_size; /* data length, or descriptors for batch */
+
+	rte_iova_t fault_address;
+	uint32_t invalid_flags;
+} __rte_aligned(32);
+
+#define BATCH_SIZE 64
+
+/**
+ * Structure used inside the driver for building up and submitting
+ * a batch of operations to the DSA hardware.
+ */
+struct rte_idxd_desc_batch {
+	struct rte_idxd_completion comp; /* the completion record for batch */
+
+	uint16_t submitted;
+	uint16_t op_count;
+	uint16_t hdl_end;
+
+	struct rte_idxd_hw_desc batch_desc;
+
+	/* batches must always have 2 descriptors, so put a null at the start */
+	struct rte_idxd_hw_desc null_desc;
+	struct rte_idxd_hw_desc ops[BATCH_SIZE];
+};
+
+/**
+ * structure used to save the "handles" provided by the user to be
+ * returned to the user on job completion.
+ */
+struct rte_idxd_user_hdl {
+	uint64_t src;
+	uint64_t dst;
+};
+
 /**
  * @internal
  * Structure representing an IDXD device instance
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 17/25] raw/ioat: add configure function for idxd devices
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (15 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 18/25] raw/ioat: add start and stop functions " Bruce Richardson
                     ` (8 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add a configure function for idxd devices, taking the same parameters as the
existing configure function for ioat. The ring_size parameter is used to
compute the maximum number of bursts (batches) to be supported by the
driver, given that the hardware works on individual bursts of descriptors
at a time.
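
For reference, a minimal configuration sketch from the application side;
the struct fields are those handled by idxd_dev_configure() below, and the
rte_rawdev_configure() call with a size argument is assumed to match the
rawdev API used throughout this series.

    #include <rte_rawdev.h>
    #include <rte_ioat_rawdev.h>

    /* Request a 512-descriptor ring; the driver rounds this to whole
     * batches and clamps it to what the work queue can hold.
     */
    static int
    configure_idxd(uint16_t dev_id)
    {
            struct rte_ioat_rawdev_config cfg = {
                    .ring_size = 512,
                    .hdls_disable = 0, /* keep completion-handle tracking */
            };
            struct rte_rawdev_info info = { .dev_private = &cfg };

            return rte_rawdev_configure(dev_id, &info, sizeof(cfg));
    }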

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 64 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h        |  3 ++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  1 +
 5 files changed, 70 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 9bee92766..b173c5ae3 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -55,6 +55,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index ba78eee90..3dad1473b 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -34,6 +34,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 672241351..5173c331c 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -44,6 +44,70 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	struct rte_ioat_rawdev_config *cfg = config;
+	uint16_t max_desc = cfg->ring_size;
+	uint16_t max_batches = max_desc / BATCH_SIZE;
+	uint16_t i;
+
+	if (config_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (dev->started) {
+		IOAT_PMD_ERR("%s: Error, device is started.", __func__);
+		return -EAGAIN;
+	}
+
+	rte_idxd->hdls_disable = cfg->hdls_disable;
+
+	/* limit the batches to what can be stored in hardware */
+	if (max_batches > idxd->max_batches) {
+		IOAT_PMD_DEBUG("Ring size of %u is too large for this device, need to limit to %u batches of %u",
+				max_desc, idxd->max_batches, BATCH_SIZE);
+		max_batches = idxd->max_batches;
+		max_desc = max_batches * BATCH_SIZE;
+	}
+	if (!rte_is_power_of_2(max_desc))
+		max_desc = rte_align32pow2(max_desc);
+	IOAT_PMD_DEBUG("Rawdev %u using %u descriptors in %u batches",
+			dev->dev_id, max_desc, max_batches);
+
+	/* in case we are reconfiguring a device, free any existing memory */
+	rte_free(rte_idxd->batch_ring);
+	rte_free(rte_idxd->hdl_ring);
+
+	rte_idxd->batch_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->batch_ring) * max_batches, 0);
+	if (rte_idxd->batch_ring == NULL)
+		return -ENOMEM;
+
+	rte_idxd->hdl_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->hdl_ring) * max_desc, 0);
+	if (rte_idxd->hdl_ring == NULL) {
+		rte_free(rte_idxd->batch_ring);
+		rte_idxd->batch_ring = NULL;
+		return -ENOMEM;
+	}
+	rte_idxd->batch_ring_sz = max_batches;
+	rte_idxd->hdl_ring_sz = max_desc;
+
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		b->batch_desc.completion = rte_mem_virt2iova(&b->comp);
+		b->batch_desc.desc_addr = rte_mem_virt2iova(&b->null_desc);
+		b->batch_desc.op_flags = (idxd_op_batch << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_COMPLETION_ADDR_VALID |
+				IDXD_FLAG_REQUEST_COMPLETION;
+	}
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index f521c85a1..aba70d8d7 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -59,6 +59,9 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 extern int idxd_rawdev_close(struct rte_rawdev *dev);
 
+extern int idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 178c432dd..e9cdce016 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -187,6 +187,7 @@ struct rte_idxd_rawdev {
 	uint16_t next_ret_hdl;   /* the next user hdl to return */
 	uint16_t last_completed_hdl; /* the last user hdl that has completed */
 	uint16_t next_free_hdl;  /* where the handle for next op will go */
+	uint16_t hdls_disable;   /* disable tracking completion handles */
 
 	struct rte_idxd_user_hdl *hdl_ring;
 	struct rte_idxd_desc_batch *batch_ring;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 18/25] raw/ioat: add start and stop functions for idxd devices
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (16 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 17/25] raw/ioat: add configure function " Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 19/25] raw/ioat: add data path " Bruce Richardson
                     ` (7 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add the start and stop functions for DSA hardware devices using the
vfio/uio kernel drivers. For vdevs using the idxd kernel driver, the device
must be started using sysfs before the device node appears for vdev use -
making start/stop functions in the driver unnecessary.
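
For completeness, the lifecycle from the application side for a vfio/uio
bound work queue looks like the following sketch (generic rawdev start/stop
calls; error handling is simplified):

    #include <rte_rawdev.h>

    /* Configure must be done first; start enables the WQ through the device
     * command register, stop disables it again.
     */
    static int
    run_idxd(uint16_t dev_id)
    {
            int ret = rte_rawdev_start(dev_id);
            if (ret != 0)
                    return ret; /* e.g. WQ not fully configured */

            /* ... enqueue copies / poll completions here ... */

            rte_rawdev_stop(dev_id);
            return 0;
    }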

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c | 50 +++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index b173c5ae3..6b5c47392 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -51,11 +51,61 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 	return ((state >> WQ_STATE_SHIFT) & WQ_STATE_MASK) == 0x1;
 }
 
+static void
+idxd_pci_dev_stop(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (!idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Work queue %d already disabled", idxd->qid);
+		return;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_wq);
+	if (err_code || idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed disabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return;
+	}
+	IOAT_PMD_DEBUG("Work queue %d disabled OK", idxd->qid);
+}
+
+static int
+idxd_pci_dev_start(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_WARN("WQ %d already enabled", idxd->qid);
+		return 0;
+	}
+
+	if (idxd->public.batch_ring == NULL) {
+		IOAT_PMD_ERR("WQ %d has not been fully configured", idxd->qid);
+		return -EINVAL;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_wq);
+	if (err_code || !idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed enabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return err_code == 0 ? -1 : err_code;
+	}
+
+	IOAT_PMD_DEBUG("Work queue %d enabled OK", idxd->qid);
+
+	return 0;
+}
+
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_start = idxd_pci_dev_start,
+		.dev_stop = idxd_pci_dev_stop,
 };
 
 /* each portal uses 4 x 4k pages */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 19/25] raw/ioat: add data path for idxd devices
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (17 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 18/25] raw/ioat: add start and stop functions " Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 20/25] raw/ioat: add info function " Bruce Richardson
                     ` (6 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add support for doing copies using DSA hardware. This is implemented by
switching on the device type field at the start of the inline functions.
Since no hardware will have both device types present, this branch will
always be predictable after the first call, meaning it has little to no
performance penalty.
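
A short sketch of the resulting common data path from the application's
point of view (using the inline functions added below; buffer IOVA handling
is simplified and assumed to be done by the caller):

    #include <rte_ioat_rawdev.h>

    /* Enqueue one copy, kick the device, then poll for its completion.
     * The same code runs on IOAT and DSA rawdevs; the type field at the
     * start of the private data selects the implementation.
     */
    static int
    copy_one(int dev_id, phys_addr_t src, phys_addr_t dst, unsigned int len)
    {
            uintptr_t src_hdl, dst_hdl;
            int n;

            if (rte_ioat_enqueue_copy(dev_id, src, dst, len, 0, 0) != 1)
                    return -1; /* no space in ring, rte_errno set to ENOSPC */
            rte_ioat_perform_ops(dev_id);

            do {
                    n = rte_ioat_completed_ops(dev_id, 1, &src_hdl, &dst_hdl);
            } while (n == 0); /* busy-poll until the copy completes */

            return n < 0 ? -1 : 0;
    }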

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_common.c         |   1 +
 drivers/raw/ioat/ioat_rawdev.c         |   1 +
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 201 +++++++++++++++++++++++--
 3 files changed, 192 insertions(+), 11 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 5173c331c..6a4e2979f 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -153,6 +153,7 @@ idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 	idxd = rawdev->dev_private;
 	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->public.type = RTE_IDXD_DEV;
 	idxd->rawdev = rawdev;
 	idxd->mz = mz;
 
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 1fe32278d..0097be87e 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -260,6 +260,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 	rawdev->driver_name = dev->device.driver->name;
 
 	ioat = rawdev->dev_private;
+	ioat->type = RTE_IOAT_DEV;
 	ioat->rawdev = rawdev;
 	ioat->mz = mz;
 	ioat->regs = dev->mem_resource[0].addr;
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index e9cdce016..36ba876ea 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -196,8 +196,8 @@ struct rte_idxd_rawdev {
 /*
  * Enqueue a copy operation onto the ioat device
  */
-static inline int
-rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+static __rte_always_inline int
+__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -233,8 +233,8 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 }
 
 /* add fence to last written descriptor */
-static inline int
-rte_ioat_fence(int dev_id)
+static __rte_always_inline int
+__ioat_fence(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -252,8 +252,8 @@ rte_ioat_fence(int dev_id)
 /*
  * Trigger hardware to begin performing enqueued operations
  */
-static inline void
-rte_ioat_perform_ops(int dev_id)
+static __rte_always_inline void
+__ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -268,8 +268,8 @@ rte_ioat_perform_ops(int dev_id)
  * @internal
  * Returns the index of the last completed operation.
  */
-static inline int
-rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
+static __rte_always_inline int
+__ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 {
 	uint64_t status = ioat->status;
 
@@ -283,8 +283,8 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 /*
  * Returns details of operations that have been completed
  */
-static inline int
-rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
+static __rte_always_inline int
+__ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -295,7 +295,7 @@ rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 	int error;
 	int i = 0;
 
-	end_read = (rte_ioat_get_last_completed(ioat, &error) + 1) & mask;
+	end_read = (__ioat_get_last_completed(ioat, &error) + 1) & mask;
 	count = (end_read - (read & mask)) & mask;
 
 	if (error) {
@@ -332,6 +332,185 @@ rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static __rte_always_inline int
+__idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
+		const struct rte_idxd_user_hdl *hdl)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+
+	/* check for room in the handle ring */
+	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl)
+		goto failed;
+
+	/* check for space in current batch */
+	if (b->op_count >= BATCH_SIZE)
+		goto failed;
+
+	/* check that we can actually use the current batch */
+	if (b->submitted)
+		goto failed;
+
+	/* write the descriptor */
+	b->ops[b->op_count++] = *desc;
+
+	/* store the completion details */
+	if (!idxd->hdls_disable)
+		idxd->hdl_ring[idxd->next_free_hdl] = *hdl;
+	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
+		idxd->next_free_hdl = 0;
+
+	return 1;
+
+failed:
+	rte_errno = ENOSPC;
+	return 0;
+}
+
+static __rte_always_inline int
+__idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	const struct rte_idxd_hw_desc desc = {
+			.op_flags =  (idxd_op_memmove << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_CACHE_CONTROL,
+			.src = src,
+			.dst = dst,
+			.size = length
+	};
+	const struct rte_idxd_user_hdl hdl = {
+			.src = src_hdl,
+			.dst = dst_hdl
+	};
+	return __idxd_write_desc(dev_id, &desc, &hdl);
+}
+
+static __rte_always_inline int
+__idxd_fence(int dev_id)
+{
+	static const struct rte_idxd_hw_desc fence = {
+			.op_flags = IDXD_FLAG_FENCE
+	};
+	static const struct rte_idxd_user_hdl null_hdl;
+	return __idxd_write_desc(dev_id, &fence, &null_hdl);
+}
+
+static __rte_always_inline void
+__idxd_movdir64b(volatile void *dst, const void *src)
+{
+	asm volatile (".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
+			:
+			: "a" (dst), "d" (src));
+}
+
+static __rte_always_inline void
+__idxd_perform_ops(int dev_id)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
+
+	if (b->submitted || b->op_count == 0)
+		return;
+	b->hdl_end = idxd->next_free_hdl;
+	b->comp.status = 0;
+	b->submitted = 1;
+	b->batch_desc.size = b->op_count + 1;
+	__idxd_movdir64b(idxd->portal, &b->batch_desc);
+
+	if (++idxd->next_batch == idxd->batch_ring_sz)
+		idxd->next_batch = 0;
+}
+
+static __rte_always_inline int
+__idxd_completed_ops(int dev_id, uint8_t max_ops,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	struct rte_idxd_rawdev *idxd =
+			(struct rte_idxd_rawdev *)rte_rawdevs[dev_id].dev_private;
+	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_completed];
+	uint16_t h_idx = idxd->next_ret_hdl;
+	int n = 0;
+
+	while (b->submitted && b->comp.status != 0) {
+		idxd->last_completed_hdl = b->hdl_end;
+		b->submitted = 0;
+		b->op_count = 0;
+		if (++idxd->next_completed == idxd->batch_ring_sz)
+			idxd->next_completed = 0;
+		b = &idxd->batch_ring[idxd->next_completed];
+	}
+
+	if (!idxd->hdls_disable)
+		for (n = 0; n < max_ops && h_idx != idxd->last_completed_hdl; n++) {
+			src_hdls[n] = idxd->hdl_ring[h_idx].src;
+			dst_hdls[n] = idxd->hdl_ring[h_idx].dst;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+	else
+		while (h_idx != idxd->last_completed_hdl) {
+			n++;
+			if (++h_idx == idxd->hdl_ring_sz)
+				h_idx = 0;
+		}
+
+	idxd->next_ret_hdl = h_idx;
+
+	return n;
+}
+
+static inline int
+rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl);
+	else
+		return __ioat_enqueue_copy(dev_id, src, dst, length,
+				src_hdl, dst_hdl);
+}
+
+static inline int
+rte_ioat_fence(int dev_id)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_fence(dev_id);
+	else
+		return __ioat_fence(dev_id);
+}
+
+static inline void
+rte_ioat_perform_ops(int dev_id)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_perform_ops(dev_id);
+	else
+		return __ioat_perform_ops(dev_id);
+}
+
+static inline int
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_completed_ops(dev_id, max_copies,
+				src_hdls, dst_hdls);
+	else
+		return __ioat_completed_ops(dev_id,  max_copies,
+				src_hdls, dst_hdls);
+}
+
 static inline void
 __rte_deprecated_msg("use rte_ioat_perform_ops() instead")
 rte_ioat_do_copies(int dev_id) { rte_ioat_perform_ops(dev_id); }
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 20/25] raw/ioat: add info function for idxd devices
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (18 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 19/25] raw/ioat: add data path " Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 21/25] raw/ioat: create separate statistics structure Bruce Richardson
                     ` (5 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Add the info_get function for DSA devices, returning just the ring size
info about the device, the same as is returned for existing IOAT/CBDMA
devices.
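
From the application side, the info can be read back as in this sketch
(assuming the generic rawdev info_get call with the private-data size
argument used elsewhere in this series):

    #include <rte_rawdev.h>
    #include <rte_ioat_rawdev.h>

    /* Read back the configured ring size of an ioat/idxd rawdev. */
    static int
    get_ring_size(uint16_t dev_id)
    {
            struct rte_ioat_rawdev_config cfg;
            struct rte_rawdev_info info = { .dev_private = &cfg };

            if (rte_rawdev_info_get(dev_id, &info, sizeof(cfg)) != 0)
                    return -1;
            return cfg.ring_size;
    }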

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c     |  1 +
 drivers/raw/ioat/idxd_vdev.c    |  1 +
 drivers/raw/ioat/ioat_common.c  | 18 ++++++++++++++++++
 drivers/raw/ioat/ioat_private.h |  3 +++
 4 files changed, 23 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 6b5c47392..bf5edcfdd 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -106,6 +106,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 3dad1473b..c75ac4317 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -35,6 +35,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_info_get = idxd_dev_info_get,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 6a4e2979f..b5cea2fda 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -44,6 +44,24 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size)
+{
+	struct rte_ioat_rawdev_config *cfg = dev_info;
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+
+	if (info_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (cfg != NULL) {
+		cfg->ring_size = rte_idxd->hdl_ring_sz;
+		cfg->hdls_disable = rte_idxd->hdls_disable;
+	}
+	return 0;
+}
+
 int
 idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size)
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index aba70d8d7..0f80d60bf 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -62,6 +62,9 @@ extern int idxd_rawdev_close(struct rte_rawdev *dev);
 extern int idxd_dev_configure(const struct rte_rawdev *dev,
 		rte_rawdev_obj_t config, size_t config_size);
 
+extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
+		size_t info_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 21/25] raw/ioat: create separate statistics structure
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (19 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 20/25] raw/ioat: add info function " Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
                     ` (4 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Rather than having the xstats as fields inside the main driver structure,
create a separate structure type for them.

As part of the change, the stats functions that referred to the counters
by their old field names are simplified to use the xstat id to index
directly into the new structure, making the code shorter and simpler.
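
The id-based indexing relies on the xstats structure being a plain block
of consecutive uint64_t counters. A minimal standalone sketch of the idea,
using a hypothetical struct rather than the driver's own, is:

    #include <stdint.h>

    /* hypothetical counter block: consecutive uint64_t fields only */
    struct demo_xstats {
            uint64_t enqueue_failed;
            uint64_t enqueued;
            uint64_t started;
            uint64_t completed;
    };

    /* read one counter by id, viewing the struct as an array of uint64_t */
    static uint64_t
    demo_xstat_by_id(const struct demo_xstats *xs, unsigned int id)
    {
            const uint64_t *vals = (const uint64_t *)xs;
            const unsigned int nb = sizeof(*xs) / sizeof(*vals);

            return (id < nb) ? vals[id] : 0;
    }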

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_rawdev.c         | 40 +++++++-------------------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 30 ++++++++++++-------
 2 files changed, 29 insertions(+), 41 deletions(-)

diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 0097be87e..4ea913fff 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -132,16 +132,14 @@ ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
 		uint64_t values[], unsigned int n)
 {
 	const struct rte_ioat_rawdev *ioat = dev->dev_private;
+	const uint64_t *stats = (const void *)&ioat->xstats;
 	unsigned int i;
 
 	for (i = 0; i < n; i++) {
-		switch (ids[i]) {
-		case 0: values[i] = ioat->enqueue_failed; break;
-		case 1: values[i] = ioat->enqueued; break;
-		case 2: values[i] = ioat->started; break;
-		case 3: values[i] = ioat->completed; break;
-		default: values[i] = 0; break;
-		}
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			values[i] = stats[ids[i]];
+		else
+			values[i] = 0;
 	}
 	return n;
 }
@@ -167,35 +165,17 @@ static int
 ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
 {
 	struct rte_ioat_rawdev *ioat = dev->dev_private;
+	uint64_t *stats = (void *)&ioat->xstats;
 	unsigned int i;
 
 	if (!ids) {
-		ioat->enqueue_failed = 0;
-		ioat->enqueued = 0;
-		ioat->started = 0;
-		ioat->completed = 0;
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
 		return 0;
 	}
 
-	for (i = 0; i < nb_ids; i++) {
-		switch (ids[i]) {
-		case 0:
-			ioat->enqueue_failed = 0;
-			break;
-		case 1:
-			ioat->enqueued = 0;
-			break;
-		case 2:
-			ioat->started = 0;
-			break;
-		case 3:
-			ioat->completed = 0;
-			break;
-		default:
-			IOAT_PMD_WARN("Invalid xstat id - cannot reset value");
-			break;
-		}
-	}
+	for (i = 0; i < nb_ids; i++)
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			stats[ids[i]] = 0;
 
 	return 0;
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 36ba876ea..89bfc8d21 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -49,17 +49,31 @@ enum rte_ioat_dev_type {
 	RTE_IDXD_DEV,
 };
 
+/**
+ * @internal
+ * some statistics for tracking, if added/changed update xstats fns
+ */
+struct rte_ioat_xstats {
+	uint64_t enqueue_failed;
+	uint64_t enqueued;
+	uint64_t started;
+	uint64_t completed;
+};
+
 /**
  * @internal
  * Structure representing an IOAT device instance
  */
 struct rte_ioat_rawdev {
+	/* common fields at the top - match those in rte_idxd_rawdev */
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	struct rte_rawdev *rawdev;
 	const struct rte_memzone *mz;
 	const struct rte_memzone *desc_mz;
 
-	volatile uint16_t *doorbell;
+	volatile uint16_t *doorbell __rte_cache_aligned;
 	phys_addr_t status_addr;
 	phys_addr_t ring_addr;
 
@@ -72,12 +86,6 @@ struct rte_ioat_rawdev {
 	unsigned short next_read;
 	unsigned short next_write;
 
-	/* some statistics for tracking, if added/changed update xstats fns*/
-	uint64_t enqueue_failed __rte_cache_aligned;
-	uint64_t enqueued;
-	uint64_t started;
-	uint64_t completed;
-
 	/* to report completions, the device will write status back here */
 	volatile uint64_t status __rte_cache_aligned;
 
@@ -209,7 +217,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	struct rte_ioat_generic_hw_desc *desc;
 
 	if (space == 0) {
-		ioat->enqueue_failed++;
+		ioat->xstats.enqueue_failed++;
 		return 0;
 	}
 
@@ -228,7 +236,7 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 					(int64_t)src_hdl);
 	rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]);
 
-	ioat->enqueued++;
+	ioat->xstats.enqueued++;
 	return 1;
 }
 
@@ -261,7 +269,7 @@ __ioat_perform_ops(int dev_id)
 			.control.completion_update = 1;
 	rte_compiler_barrier();
 	*ioat->doorbell = ioat->next_write;
-	ioat->started = ioat->enqueued;
+	ioat->xstats.started = ioat->xstats.enqueued;
 }
 
 /**
@@ -328,7 +336,7 @@ __ioat_completed_ops(int dev_id, uint8_t max_copies,
 
 end:
 	ioat->next_read = read;
-	ioat->completed += count;
+	ioat->xstats.completed += count;
 	return count;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 22/25] raw/ioat: move xstats functions to common file
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (20 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 21/25] raw/ioat: create separate statistics structure Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
                     ` (3 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

The xstats functions can be used by all ioat devices, so move them from
ioat_rawdev.c to ioat_common.c and add the function prototypes to the
internal header file.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/ioat_common.c  | 59 +++++++++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h | 10 ++++++
 drivers/raw/ioat/ioat_rawdev.c  | 58 --------------------------------
 3 files changed, 69 insertions(+), 58 deletions(-)

diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index b5cea2fda..142e171bc 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -5,9 +5,68 @@
 #include <rte_rawdev_pmd.h>
 #include <rte_memzone.h>
 #include <rte_common.h>
+#include <rte_string_fns.h>
 
 #include "ioat_private.h"
 
+static const char * const xstat_names[] = {
+		"failed_enqueues", "successful_enqueues",
+		"copies_started", "copies_completed"
+};
+
+int
+ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n)
+{
+	const struct rte_ioat_rawdev *ioat = dev->dev_private;
+	const uint64_t *stats = (const void *)&ioat->xstats;
+	unsigned int i;
+
+	for (i = 0; i < n; i++) {
+		if (ids[i] >= sizeof(ioat->xstats)/sizeof(*stats))
+			values[i] = 0;
+		else
+			values[i] = stats[ids[i]];
+	}
+	return n;
+}
+
+int
+ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size)
+{
+	unsigned int i;
+
+	RTE_SET_USED(dev);
+	if (size < RTE_DIM(xstat_names))
+		return RTE_DIM(xstat_names);
+
+	for (i = 0; i < RTE_DIM(xstat_names); i++)
+		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
+
+	return RTE_DIM(xstat_names);
+}
+
+int
+ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
+{
+	struct rte_ioat_rawdev *ioat = dev->dev_private;
+	uint64_t *stats = (void *)&ioat->xstats;
+	unsigned int i;
+
+	if (!ids) {
+		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
+		return 0;
+	}
+
+	for (i = 0; i < nb_ids; i++)
+		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
+			stats[ids[i]] = 0;
+
+	return 0;
+}
+
 int
 idxd_rawdev_close(struct rte_rawdev *dev __rte_unused)
 {
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index 0f80d60bf..ab9a3e6cc 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -53,6 +53,16 @@ struct idxd_rawdev {
 	} u;
 };
 
+int ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
+		uint64_t values[], unsigned int n);
+
+int ioat_xstats_get_names(const struct rte_rawdev *dev,
+		struct rte_rawdev_xstats_name *names,
+		unsigned int size);
+
+int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
+		uint32_t nb_ids);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 4ea913fff..dd2543c80 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -122,64 +122,6 @@ ioat_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 	return 0;
 }
 
-static const char * const xstat_names[] = {
-		"failed_enqueues", "successful_enqueues",
-		"copies_started", "copies_completed"
-};
-
-static int
-ioat_xstats_get(const struct rte_rawdev *dev, const unsigned int ids[],
-		uint64_t values[], unsigned int n)
-{
-	const struct rte_ioat_rawdev *ioat = dev->dev_private;
-	const uint64_t *stats = (const void *)&ioat->xstats;
-	unsigned int i;
-
-	for (i = 0; i < n; i++) {
-		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
-			values[i] = stats[ids[i]];
-		else
-			values[i] = 0;
-	}
-	return n;
-}
-
-static int
-ioat_xstats_get_names(const struct rte_rawdev *dev,
-		struct rte_rawdev_xstats_name *names,
-		unsigned int size)
-{
-	unsigned int i;
-
-	RTE_SET_USED(dev);
-	if (size < RTE_DIM(xstat_names))
-		return RTE_DIM(xstat_names);
-
-	for (i = 0; i < RTE_DIM(xstat_names); i++)
-		strlcpy(names[i].name, xstat_names[i], sizeof(names[i]));
-
-	return RTE_DIM(xstat_names);
-}
-
-static int
-ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids, uint32_t nb_ids)
-{
-	struct rte_ioat_rawdev *ioat = dev->dev_private;
-	uint64_t *stats = (void *)&ioat->xstats;
-	unsigned int i;
-
-	if (!ids) {
-		memset(&ioat->xstats, 0, sizeof(ioat->xstats));
-		return 0;
-	}
-
-	for (i = 0; i < nb_ids; i++)
-		if (ids[i] < sizeof(ioat->xstats)/sizeof(*stats))
-			stats[ids[i]] = 0;
-
-	return 0;
-}
-
 static int
 ioat_dev_close(struct rte_rawdev *dev __rte_unused)
 {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 23/25] raw/ioat: add xstats tracking for idxd devices
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (21 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 24/25] raw/ioat: clean up use of common test function Bruce Richardson
                     ` (2 subsequent siblings)
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Update the relevant stats in the data path functions and point the xstats
function pointers in the device ops structures to the existing ioat
functions.

At this point, all hooks necessary to support the existing unit tests are
in place, so run those tests for each device.
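
Once these callbacks are wired up, the counters are visible through the
generic rawdev xstats API, which the unit tests use. A minimal sketch of
reading them from an application (dev_id and the array sizes are
placeholder assumptions):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_rawdev.h>

    /* sketch: print up to 8 xstats of a rawdev; "dev_id" is assumed valid */
    static void
    dump_xstats(uint16_t dev_id)
    {
            struct rte_rawdev_xstats_name names[8];
            unsigned int ids[8];
            uint64_t vals[8];
            int nb, i;

            nb = rte_rawdev_xstats_names_get(dev_id, names, 8);
            if (nb <= 0 || nb > 8)
                    return;
            for (i = 0; i < nb; i++)
                    ids[i] = i;
            if (rte_rawdev_xstats_get(dev_id, ids, vals, nb) != nb)
                    return;
            for (i = 0; i < nb; i++)
                    printf("%s: %" PRIu64 "\n", names[i].name, vals[i]);
    }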

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            | 3 +++
 drivers/raw/ioat/idxd_vdev.c           | 3 +++
 drivers/raw/ioat/ioat_rawdev_test.c    | 2 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 6 ++++++
 4 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index bf5edcfdd..9113f8c8e 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -107,6 +107,9 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index c75ac4317..38218cc1e 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -36,6 +36,9 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index ceeac92ef..a84be56c4 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -273,5 +273,5 @@ int
 idxd_rawdev_test(uint16_t dev_id)
 {
 	rte_rawdev_dump(dev_id, stdout);
-	return 0;
+	return ioat_rawdev_test(dev_id);
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 89bfc8d21..d0045d8a4 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -184,6 +184,8 @@ struct rte_idxd_user_hdl {
  */
 struct rte_idxd_rawdev {
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	void *portal; /* address to write the batch descriptor */
 
 	/* counters to track the batches and the individual op handles */
@@ -369,9 +371,11 @@ __idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
 	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
 		idxd->next_free_hdl = 0;
 
+	idxd->xstats.enqueued++;
 	return 1;
 
 failed:
+	idxd->xstats.enqueue_failed++;
 	rte_errno = ENOSPC;
 	return 0;
 }
@@ -429,6 +433,7 @@ __idxd_perform_ops(int dev_id)
 
 	if (++idxd->next_batch == idxd->batch_ring_sz)
 		idxd->next_batch = 0;
+	idxd->xstats.started = idxd->xstats.enqueued;
 }
 
 static __rte_always_inline int
@@ -466,6 +471,7 @@ __idxd_completed_ops(int dev_id, uint8_t max_ops,
 
 	idxd->next_ret_hdl = h_idx;
 
+	idxd->xstats.completed += n;
 	return n;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 24/25] raw/ioat: clean up use of common test function
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (22 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 25/25] raw/ioat: add fill operation Bruce Richardson
  2020-10-08 12:32   ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Thomas Monjalon
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Now that all devices can pass the same set of unit tests, eliminate the
temporary idxd_rawdev_test function and move the prototype for
ioat_rawdev_test to the proper internal header file, to be used by all
device instances.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c         | 2 +-
 drivers/raw/ioat/idxd_vdev.c        | 2 +-
 drivers/raw/ioat/ioat_private.h     | 4 ++--
 drivers/raw/ioat/ioat_rawdev.c      | 2 --
 drivers/raw/ioat/ioat_rawdev_test.c | 7 -------
 5 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 9113f8c8e..165a9ea7f 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -101,7 +101,7 @@ idxd_pci_dev_start(struct rte_rawdev *dev)
 
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_start = idxd_pci_dev_start,
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 38218cc1e..50d47d05c 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -32,7 +32,7 @@ struct idxd_vdev_args {
 
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
-		.dev_selftest = idxd_rawdev_test,
+		.dev_selftest = ioat_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
 		.dev_info_get = idxd_dev_info_get,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index ab9a3e6cc..a74bc0422 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -63,6 +63,8 @@ int ioat_xstats_get_names(const struct rte_rawdev *dev,
 int ioat_xstats_reset(struct rte_rawdev *dev, const uint32_t *ids,
 		uint32_t nb_ids);
 
+extern int ioat_rawdev_test(uint16_t dev_id);
+
 extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 		       const struct idxd_rawdev *idxd,
 		       const struct rte_rawdev_ops *ops);
@@ -75,8 +77,6 @@ extern int idxd_dev_configure(const struct rte_rawdev *dev,
 extern int idxd_dev_info_get(struct rte_rawdev *dev, rte_rawdev_obj_t dev_info,
 		size_t info_size);
 
-extern int idxd_rawdev_test(uint16_t dev_id);
-
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
 
 #endif /* _IOAT_PRIVATE_H_ */
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index dd2543c80..2c88b4369 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -128,8 +128,6 @@ ioat_dev_close(struct rte_rawdev *dev __rte_unused)
 	return 0;
 }
 
-extern int ioat_rawdev_test(uint16_t dev_id);
-
 static int
 ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
 {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index a84be56c4..60d189b62 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -268,10 +268,3 @@ ioat_rawdev_test(uint16_t dev_id)
 	free(ids);
 	return -1;
 }
-
-int
-idxd_rawdev_test(uint16_t dev_id)
-{
-	rte_rawdev_dump(dev_id, stdout);
-	return ioat_rawdev_test(dev_id);
-}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* [dpdk-dev] [PATCH v6 25/25] raw/ioat: add fill operation
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (23 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 24/25] raw/ioat: clean up use of common test function Bruce Richardson
@ 2020-10-08  9:51   ` Bruce Richardson
  2020-10-08 12:32   ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Thomas Monjalon
  25 siblings, 0 replies; 157+ messages in thread
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Kevin Laatz, Bruce Richardson, Radu Nicolau

From: Kevin Laatz <kevin.laatz@intel.com>

Add fill operation enqueue support for IOAT and IDXD. The fill enqueue is
similar to the copy enqueue, but takes a 'pattern' to be written to the
destination address rather than a source address to copy from. This patch
also includes an additional test case for the new operation type.
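
A minimal usage sketch of the new call, assuming an already-configured and
started device and placeholder values for the destination IOVA, length and
completion handle:

    #include <rte_ioat_rawdev.h>

    /* sketch: fill "len" bytes at IOVA "dst_iova" with a repeating
     * 8-byte pattern; all parameters are assumed-valid placeholders */
    static int
    fill_buffer(int dev_id, phys_addr_t dst_iova, unsigned int len,
                    uintptr_t hdl)
    {
            const uint64_t pattern = 0x0123456789abcdefULL;

            if (rte_ioat_enqueue_fill(dev_id, pattern, dst_iova, len,
                            hdl) != 1)
                    return -1;              /* descriptor ring is full */
            rte_ioat_perform_ops(dev_id);   /* start the hardware */
            /* the handle is later returned by rte_ioat_completed_ops() */
            return 0;
    }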

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst            | 10 ++++
 doc/guides/rel_notes/release_20_11.rst |  2 +
 drivers/raw/ioat/ioat_rawdev_test.c    | 62 ++++++++++++++++++++++++
 drivers/raw/ioat/rte_ioat_rawdev.h     | 27 +++++++++++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 65 ++++++++++++++++++++++++--
 5 files changed, 161 insertions(+), 5 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 7c2a2d457..250cfc48a 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -285,6 +285,16 @@ is correct before freeing the data buffers using the returned handles:
         }
 
 
+Filling an Area of Memory
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The IOAT driver also has support for the ``fill`` operation, where an area
+of memory is overwritten, or filled, with a short pattern of data.
+Fill operations can be performed in much the same way as copy operations
+described above, just using the ``rte_ioat_enqueue_fill()`` function rather
+than the ``rte_ioat_enqueue_copy()`` function.
+
+
 Querying Device Statistics
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index e48e6ea75..943ec83fd 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -122,6 +122,8 @@ New Features
 
   * Added support for Intel\ |reg| Data Streaming Accelerator hardware.
     For more information, see https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
+  * Added support for the fill operation via the API ``rte_ioat_enqueue_fill()``,
+    where the hardware fills an area of memory with a repeating pattern.
   * Added a per-device configuration flag to disable management of user-provided completion handles
   * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
     and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 60d189b62..101f24a67 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -155,6 +155,52 @@ test_enqueue_copies(int dev_id)
 	return 0;
 }
 
+static int
+test_enqueue_fill(int dev_id)
+{
+	const unsigned int length[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst = rte_pktmbuf_alloc(pool);
+	char *dst_data = rte_pktmbuf_mtod(dst, char *);
+	struct rte_mbuf *completed[2] = {0};
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	for (i = 0; i < RTE_DIM(length); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, length[i]);
+
+		/* perform the fill operation */
+		if (rte_ioat_enqueue_fill(dev_id, pattern,
+				dst->buf_iova + dst->data_off, length[i],
+				(uintptr_t)dst) != 1) {
+			PRINT_ERR("Error with rte_ioat_enqueue_fill\n");
+			return -1;
+		}
+
+		rte_ioat_perform_ops(dev_id);
+		usleep(100);
+
+		if (rte_ioat_completed_ops(dev_id, 1, (void *)&completed[0],
+			(void *)&completed[1]) != 1) {
+			PRINT_ERR("Error with completed ops\n");
+			return -1;
+		}
+		/* check the result */
+		for (j = 0; j < length[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte) {
+				PRINT_ERR("Error with fill operation (length = %u): got (%x), not (%x)\n",
+						length[i], dst_data[j],
+						pat_byte);
+				return -1;
+			}
+		}
+	}
+
+	rte_pktmbuf_free(dst);
+	return 0;
+}
+
 int
 ioat_rawdev_test(uint16_t dev_id)
 {
@@ -234,6 +280,7 @@ ioat_rawdev_test(uint16_t dev_id)
 	}
 
 	/* run the test cases */
+	printf("Running Copy Tests\n");
 	for (i = 0; i < 100; i++) {
 		unsigned int j;
 
@@ -247,6 +294,21 @@ ioat_rawdev_test(uint16_t dev_id)
 	}
 	printf("\n");
 
+	/* test enqueue fill operation */
+	printf("Running Fill Tests\n");
+	for (i = 0; i < 100; i++) {
+		unsigned int j;
+
+		if (test_enqueue_fill(dev_id) != 0)
+			goto err;
+
+		rte_rawdev_xstats_get(dev_id, ids, stats, nb_xstats);
+		for (j = 0; j < nb_xstats; j++)
+			printf("%s: %"PRIu64"   ", snames[j].name, stats[j]);
+		printf("\r");
+	}
+	printf("\n");
+
 	rte_rawdev_stop(dev_id);
 	if (rte_rawdev_xstats_reset(dev_id, NULL, 0) != 0) {
 		PRINT_ERR("Error resetting xstat values\n");
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 21a929012..f9e8425a7 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -37,6 +37,33 @@ struct rte_ioat_rawdev_config {
 	bool hdls_disable;    /**< if set, ignore user-supplied handle params */
 };
 
+/**
+ * Enqueue a fill operation onto the ioat device
+ *
+ * This queues up a fill operation to be performed by hardware, but does not
+ * trigger hardware to begin that operation.
+ *
+ * @param dev_id
+ *   The rawdev device id of the ioat instance
+ * @param pattern
+ *   The pattern to populate the destination buffer with
+ * @param dst
+ *   The physical address of the destination buffer
+ * @param length
+ *   The length of the destination buffer
+ * @param dst_hdl
+ *   An opaque handle for the destination data, to be returned when this
+ *   operation has been completed and the user polls for the completion details.
+ *   NOTE: If hdls_disable configuration option for the device is set, this
+ *   parameter is ignored.
+ * @return
+ *   Number of operations enqueued, either 0 or 1
+ */
+static inline int
+__rte_experimental
+rte_ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int length, uintptr_t dst_hdl);
+
 /**
  * Enqueue a copy operation onto the ioat device
  *
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index d0045d8a4..c2c4601ca 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -115,6 +115,13 @@ enum rte_idxd_ops {
 #define IDXD_FLAG_REQUEST_COMPLETION    (1 << 3)
 #define IDXD_FLAG_CACHE_CONTROL         (1 << 8)
 
+#define IOAT_COMP_UPDATE_SHIFT	3
+#define IOAT_CMD_OP_SHIFT	24
+enum rte_ioat_ops {
+	ioat_op_copy = 0,	/* Standard DMA Operation */
+	ioat_op_fill		/* Block Fill */
+};
+
 /**
  * Hardware descriptor used by DSA hardware, for both bursts and
  * for individual operations.
@@ -203,11 +210,8 @@ struct rte_idxd_rawdev {
 	struct rte_idxd_desc_batch *batch_ring;
 };
 
-/*
- * Enqueue a copy operation onto the ioat device
- */
 static __rte_always_inline int
-__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+__ioat_write_desc(int dev_id, uint32_t op, uint64_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -229,7 +233,8 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	desc = &ioat->desc_ring[write];
 	desc->size = length;
 	/* set descriptor write-back every 16th descriptor */
-	desc->u.control_raw = (uint32_t)((!(write & 0xF)) << 3);
+	desc->u.control_raw = (uint32_t)((op << IOAT_CMD_OP_SHIFT) |
+			(!(write & 0xF) << IOAT_COMP_UPDATE_SHIFT));
 	desc->src_addr = src;
 	desc->dest_addr = dst;
 
@@ -242,6 +247,27 @@ __ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 	return 1;
 }
 
+static __rte_always_inline int
+__ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int length, uintptr_t dst_hdl)
+{
+	static const uintptr_t null_hdl;
+
+	return __ioat_write_desc(dev_id, ioat_op_fill, pattern, dst, length,
+			null_hdl, dst_hdl);
+}
+
+/*
+ * Enqueue a copy operation onto the ioat device
+ */
+static __rte_always_inline int
+__ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
+		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
+{
+	return __ioat_write_desc(dev_id, ioat_op_copy, src, dst, length,
+			src_hdl, dst_hdl);
+}
+
 /* add fence to last written descriptor */
 static __rte_always_inline int
 __ioat_fence(int dev_id)
@@ -380,6 +406,23 @@ __idxd_write_desc(int dev_id, const struct rte_idxd_hw_desc *desc,
 	return 0;
 }
 
+static __rte_always_inline int
+__idxd_enqueue_fill(int dev_id, uint64_t pattern, rte_iova_t dst,
+		unsigned int length, uintptr_t dst_hdl)
+{
+	const struct rte_idxd_hw_desc desc = {
+			.op_flags =  (idxd_op_fill << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_CACHE_CONTROL,
+			.src = pattern,
+			.dst = dst,
+			.size = length
+	};
+	const struct rte_idxd_user_hdl hdl = {
+			.dst = dst_hdl
+	};
+	return __idxd_write_desc(dev_id, &desc, &hdl);
+}
+
 static __rte_always_inline int
 __idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
@@ -475,6 +518,18 @@ __idxd_completed_ops(int dev_id, uint8_t max_ops,
 	return n;
 }
 
+static inline int
+rte_ioat_enqueue_fill(int dev_id, uint64_t pattern, phys_addr_t dst,
+		unsigned int len, uintptr_t dst_hdl)
+{
+	enum rte_ioat_dev_type *type =
+			(enum rte_ioat_dev_type *)rte_rawdevs[dev_id].dev_private;
+	if (*type == RTE_IDXD_DEV)
+		return __idxd_enqueue_fill(dev_id, pattern, dst, len, dst_hdl);
+	else
+		return __ioat_enqueue_fill(dev_id, pattern, dst, len, dst_hdl);
+}
+
 static inline int
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 157+ messages in thread

* Re: [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support
  2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
                     ` (24 preceding siblings ...)
  2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 25/25] raw/ioat: add fill operation Bruce Richardson
@ 2020-10-08 12:32   ` Thomas Monjalon
  25 siblings, 0 replies; 157+ messages in thread
From: Thomas Monjalon @ 2020-10-08 12:32 UTC (permalink / raw)
  To: Bruce Richardson, kevin.laatz; +Cc: dev, patrick.fu

> This patchset adds some small enhancements, some rework and also support
> for new hardware to the ioat rawdev driver. Most rework and enhancements
> are largely self-explanatory from the individual patches.
> 
> The new hardware support is for the Intel(R) DSA accelerator which will be
> present in future Intel processors. A description of this new hardware is
> covered in [1]. Functions specific to the new hardware use the "idxd"
> prefix, for consistency with the kernel driver.
> 
> [1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

Applied, thanks



^ permalink raw reply	[flat|nested] 157+ messages in thread

end of thread, other threads:[~2020-10-08 12:33 UTC | newest]

Thread overview: 157+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-21  9:51 [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 01/20] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 02/20] raw/ioat: support multiple devices being tested Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 03/20] app/test: change rawdev autotest to run selftest on all devs Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 04/20] app/test: remove ioat-specific autotest Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 05/20] raw/ioat: split header for readability Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 06/20] raw/ioat: make the HW register spec private Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 07/20] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 08/20] raw/ioat: add skeleton for vfio/uio based DSA device Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 09/20] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 10/20] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 11/20] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 12/20] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 13/20] raw/ioat: add configure function " Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 14/20] raw/ioat: add start and stop functions " Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 15/20] raw/ioat: add data path support " Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 16/20] raw/ioat: add info function " Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 17/20] raw/ioat: create separate statistics structure Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 18/20] raw/ioat: move xstats functions to common file Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 19/20] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
2020-07-21  9:51 ` [dpdk-dev] [PATCH 20.11 20/20] raw/ioat: clean up use of common test function Bruce Richardson
2020-08-21 16:29 ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 01/18] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 02/18] raw/ioat: split header for readability Bruce Richardson
2020-08-25 15:27     ` Laatz, Kevin
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 03/18] raw/ioat: make the HW register spec private Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 04/18] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 05/18] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 06/18] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 07/18] raw/ioat: include example configuration script Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 08/18] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
2020-08-25 15:27     ` Laatz, Kevin
2020-08-26 15:45       ` Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 09/18] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 10/18] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
2020-08-25 15:27     ` Laatz, Kevin
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 11/18] raw/ioat: add configure function " Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 12/18] raw/ioat: add start and stop functions " Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 13/18] raw/ioat: add data path " Bruce Richardson
2020-08-25 15:27     ` Laatz, Kevin
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 14/18] raw/ioat: add info function " Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 15/18] raw/ioat: create separate statistics structure Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 16/18] raw/ioat: move xstats functions to common file Bruce Richardson
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 17/18] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
2020-08-24  9:56     ` Laatz, Kevin
2020-08-21 16:29   ` [dpdk-dev] [PATCH v2 18/18] raw/ioat: clean up use of common test function Bruce Richardson
2020-08-21 16:39   ` [dpdk-dev] [PATCH v2 00/18] raw/ioat: enhancements and new hardware support Bruce Richardson
2020-09-25 11:08 ` [dpdk-dev] [PATCH v3 00/25] " Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 01/25] doc/api: add ioat driver to index Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 02/25] raw/ioat: fix missing close function Bruce Richardson
2020-09-25 11:12     ` Bruce Richardson
2020-09-25 11:12     ` Pai G, Sunil
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 03/25] raw/ioat: enable use from C++ code Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 04/25] raw/ioat: include extra info in error messages Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 06/25] raw/ioat: split header for readability Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 08/25] raw/ioat: add separate API for fence call Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 09/25] raw/ioat: make the HW register spec private Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 13/25] raw/ioat: include example configuration script Bruce Richardson
2020-09-25 11:08   ` [dpdk-dev] [PATCH v3 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 17/25] raw/ioat: add configure function " Bruce Richardson
2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 18/25] raw/ioat: add start and stop functions " Bruce Richardson
2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 19/25] raw/ioat: add data path " Bruce Richardson
2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 20/25] raw/ioat: add info function " Bruce Richardson
2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 21/25] raw/ioat: create separate statistics structure Bruce Richardson
2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 24/25] raw/ioat: clean up use of common test function Bruce Richardson
2020-09-25 11:09   ` [dpdk-dev] [PATCH v3 25/25] raw/ioat: add fill operation Bruce Richardson
2020-09-28 16:42 ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 01/25] doc/api: add ioat driver to index Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 02/25] raw/ioat: fix missing close function Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 03/25] raw/ioat: enable use from C++ code Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 04/25] raw/ioat: include extra info in error messages Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 06/25] raw/ioat: split header for readability Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 08/25] raw/ioat: add separate API for fence call Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 09/25] raw/ioat: make the HW register spec private Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 13/25] raw/ioat: include example configuration script Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 17/25] raw/ioat: add configure function " Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 18/25] raw/ioat: add start and stop functions " Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 19/25] raw/ioat: add data path " Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 20/25] raw/ioat: add info function " Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 21/25] raw/ioat: create separate statistics structure Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 24/25] raw/ioat: clean up use of common test function Bruce Richardson
2020-09-28 16:42   ` [dpdk-dev] [PATCH v4 25/25] raw/ioat: add fill operation Bruce Richardson
2020-10-02 14:07   ` [dpdk-dev] [PATCH v4 00/25] raw/ioat: enhancements and new hardware support Nicolau, Radu
2020-10-06 21:10   ` Thomas Monjalon
2020-10-07  9:46     ` Bruce Richardson
2020-10-07 16:29 ` [dpdk-dev] [PATCH v5 " Bruce Richardson
2020-10-07 16:29   ` [dpdk-dev] [PATCH v5 01/25] doc/api: add ioat driver to index Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 02/25] raw/ioat: fix missing close function Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 03/25] raw/ioat: enable use from C++ code Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 04/25] raw/ioat: include extra info in error messages Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 06/25] raw/ioat: split header for readability Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 08/25] raw/ioat: add separate API for fence call Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 09/25] raw/ioat: make the HW register spec private Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 13/25] raw/ioat: include example configuration script Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 17/25] raw/ioat: add configure function " Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 18/25] raw/ioat: add start and stop functions " Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 19/25] raw/ioat: add data path " Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 20/25] raw/ioat: add info function " Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 21/25] raw/ioat: create separate statistics structure Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 24/25] raw/ioat: clean up use of common test function Bruce Richardson
2020-10-07 16:30   ` [dpdk-dev] [PATCH v5 25/25] raw/ioat: add fill operation Bruce Richardson
2020-10-08  9:51 ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 01/25] doc/api: add ioat driver to index Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 02/25] raw/ioat: fix missing close function Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 03/25] raw/ioat: enable use from C++ code Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 04/25] raw/ioat: include extra info in error messages Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 05/25] raw/ioat: add a flag to control copying handle parameters Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 06/25] raw/ioat: split header for readability Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 08/25] raw/ioat: add separate API for fence call Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 09/25] raw/ioat: make the HW register spec private Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 10/25] usertools/dpdk-devbind.py: add support for DSA HW Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 11/25] raw/ioat: add skeleton for VFIO/UIO based DSA device Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 12/25] raw/ioat: add vdev probe for DSA/idxd devices Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 13/25] raw/ioat: include example configuration script Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 14/25] raw/ioat: create rawdev instances on idxd PCI probe Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 15/25] raw/ioat: create rawdev instances for idxd vdevs Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 16/25] raw/ioat: add datapath data structures for idxd devices Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 17/25] raw/ioat: add configure function " Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 18/25] raw/ioat: add start and stop functions " Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 19/25] raw/ioat: add data path " Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 20/25] raw/ioat: add info function " Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 21/25] raw/ioat: create separate statistics structure Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 22/25] raw/ioat: move xstats functions to common file Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 23/25] raw/ioat: add xstats tracking for idxd devices Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 24/25] raw/ioat: clean up use of common test function Bruce Richardson
2020-10-08  9:51   ` [dpdk-dev] [PATCH v6 25/25] raw/ioat: add fill operation Bruce Richardson
2020-10-08 12:32   ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Thomas Monjalon
