DPDK patches and discussions
* [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver
@ 2021-09-09 11:14 Gagandeep Singh
  2021-09-09 11:14 ` [dpdk-dev] [PATCH 1/6] dma/dpaa: introduce " Gagandeep Singh
                   ` (7 more replies)
  0 siblings, 8 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-09-09 11:14 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This series adds a DMA driver for the NXP LS1046A and LS1043A SoCs.

Gagandeep Singh (6):
  dma/dpaa: introduce DPAA DMA driver
  dma/dpaa: add device probe and remove functionality
  dma/dpaa: add driver logs
  dma/dpaa: support basic operations
  dma/dpaa: support DMA operations
  doc: add user guide of DPAA DMA driver

 MAINTAINERS                            |   11 +
 doc/guides/dmadevs/dpaa.rst            |   60 ++
 doc/guides/rel_notes/release_21_11.rst |    3 +
 drivers/bus/dpaa/dpaa_bus.c            |   22 +
 drivers/bus/dpaa/rte_dpaa_bus.h        |    5 +
 drivers/common/dpaax/dpaa_list.h       |    2 +
 drivers/dma/dpaa/dpaa_qdma.c           | 1002 ++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h           |  257 ++++++
 drivers/dma/dpaa/dpaa_qdma_logs.h      |   46 ++
 drivers/dma/dpaa/meson.build           |   14 +
 drivers/dma/dpaa/version.map           |    4 +
 drivers/dma/meson.build                |    1 +
 12 files changed, 1427 insertions(+)
 create mode 100644 doc/guides/dmadevs/dpaa.rst
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.c
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.h
 create mode 100644 drivers/dma/dpaa/dpaa_qdma_logs.h
 create mode 100644 drivers/dma/dpaa/meson.build
 create mode 100644 drivers/dma/dpaa/version.map

-- 
2.25.1



* [dpdk-dev] [PATCH 1/6] dma/dpaa: introduce DPAA DMA driver
  2021-09-09 11:14 [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver Gagandeep Singh
@ 2021-09-09 11:14 ` Gagandeep Singh
  2021-09-09 11:14 ` [dpdk-dev] [PATCH 2/6] dma/dpaa: add device probe and remove functionality Gagandeep Singh
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-09-09 11:14 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

The DPAA DMA driver is an implementation of the dmadev APIs that
provides a means to initiate a DMA transaction from the CPU. Once
initiated, the DMA transaction completes without the CPU being involved
in the actual data transfer. This is achieved by using the QDMA
controller of the DPAA SoC.
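
From the application's point of view, the resulting offload pattern looks
roughly like this. This is a minimal sketch against the in-flight
rte_dmadev_* API this series builds on (renamed to rte_dma_* before the
21.11 release); dev_id, vchan and the src/dst IOVAs are assumed to be
set up already:

  uint16_t last_idx;
  bool has_error = false;

  /* The CPU only posts the copy request to the QDMA engine... */
  rte_dmadev_copy(dev_id, vchan, src_iova, dst_iova, length,
                  RTE_DMA_OP_FLAG_SUBMIT);

  /* ...and is free to do other work while the hardware moves the data. */
  while (rte_dmadev_completed(dev_id, vchan, 1, &last_idx, &has_error) == 0)
          ; /* poll for completion, or interleave useful work here */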

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 MAINTAINERS                            | 10 +++++++++
 doc/guides/rel_notes/release_21_11.rst |  3 +++
 drivers/bus/dpaa/dpaa_bus.c            | 22 ++++++++++++++++++++
 drivers/bus/dpaa/rte_dpaa_bus.h        |  5 +++++
 drivers/common/dpaax/dpaa_list.h       |  2 ++
 drivers/dma/dpaa/dpaa_qdma.c           | 28 ++++++++++++++++++++++++++
 drivers/dma/dpaa/meson.build           | 14 +++++++++++++
 drivers/dma/dpaa/version.map           |  4 ++++
 drivers/dma/meson.build                |  1 +
 9 files changed, 89 insertions(+)
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.c
 create mode 100644 drivers/dma/dpaa/meson.build
 create mode 100644 drivers/dma/dpaa/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 266f5ac1da..e3113b2e7e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1341,6 +1341,16 @@ F: doc/guides/rawdevs/ntb.rst
 F: examples/ntb/
 F: doc/guides/sample_app_ug/ntb.rst
 
+
+Dmadev Drivers
+--------------
+
+NXP DPAA DMA
+M: Gagandeep Singh <g.singh@nxp.com>
+M: Nipun Gupta <nipun.gupta@nxp.com>
+F: drivers/dma/dpaa/
+
+
 Packet processing
 -----------------
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a554ef..d322bc93af 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -55,6 +55,9 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added NXP DPAA DMA driver.**
+
+  * Added a new dmadev driver for the NXP DPAA platform.
 
 Removed Items
 -------------
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index e499305d85..09cd30d41c 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -232,6 +232,28 @@ dpaa_create_device_list(void)
 
 	rte_dpaa_bus.device_count += i;
 
+	/* Creating QDMA Device */
+	for (i = 0; i < RTE_DPAA_QDMA_DEVICES; i++) {
+		dev = calloc(1, sizeof(struct rte_dpaa_device));
+		if (!dev) {
+			DPAA_BUS_LOG(ERR, "Failed to allocate QDMA device");
+			ret = -1;
+			goto cleanup;
+		}
+
+		dev->device_type = FSL_DPAA_QDMA;
+		dev->id.dev_id = rte_dpaa_bus.device_count + i;
+
+		memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
+		sprintf(dev->name, "dpaa_qdma-%d", i+1);
+		DPAA_BUS_LOG(INFO, "%s qdma device added", dev->name);
+		dev->device.name = dev->name;
+		dev->device.devargs = dpaa_devargs_lookup(dev);
+
+		dpaa_add_to_device_list(dev);
+	}
+	rte_dpaa_bus.device_count += i;
+
 	return 0;
 
 cleanup:
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 48d5cf4625..678a126154 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -61,6 +61,9 @@ dpaa_seqn(struct rte_mbuf *mbuf)
 /** Device driver supports link state interrupt */
 #define RTE_DPAA_DRV_INTR_LSC  0x0008
 
+/** Number of supported QDMA devices */
+#define RTE_DPAA_QDMA_DEVICES  1
+
 #define RTE_DEV_TO_DPAA_CONST(ptr) \
 	container_of(ptr, const struct rte_dpaa_device, device)
 
@@ -76,6 +79,7 @@ TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
 enum rte_dpaa_type {
 	FSL_DPAA_ETH = 1,
 	FSL_DPAA_CRYPTO,
+	FSL_DPAA_QDMA
 };
 
 struct rte_dpaa_bus {
@@ -98,6 +102,7 @@ struct rte_dpaa_device {
 	union {
 		struct rte_eth_dev *eth_dev;
 		struct rte_cryptodev *crypto_dev;
+		struct rte_dmadev *dmadev;
 	};
 	struct rte_dpaa_driver *driver;
 	struct dpaa_device_id id;
diff --git a/drivers/common/dpaax/dpaa_list.h b/drivers/common/dpaax/dpaa_list.h
index e94575982b..319a3562ab 100644
--- a/drivers/common/dpaax/dpaa_list.h
+++ b/drivers/common/dpaax/dpaa_list.h
@@ -35,6 +35,8 @@ do { \
 	const struct list_head *__p298 = (p); \
 	((__p298->next == __p298) && (__p298->prev == __p298)); \
 })
+#define list_first_entry(ptr, type, member) \
+	list_entry((ptr)->next, type, member)
 #define list_add(p, l) \
 do { \
 	struct list_head *__p298 = (p); \
diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
new file mode 100644
index 0000000000..2ef3ee0c35
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <rte_dpaa_bus.h>
+
+static int
+dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
+		__rte_unused struct rte_dpaa_device *dpaa_dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_remove(__rte_unused struct rte_dpaa_device *dpaa_dev)
+{
+	return 0;
+}
+
+static struct rte_dpaa_driver rte_dpaa_qdma_pmd = {
+	.drv_type = FSL_DPAA_QDMA,
+	.probe = dpaa_qdma_probe,
+	.remove = dpaa_qdma_remove,
+};
+
+RTE_PMD_REGISTER_DPAA(dpaa_qdma, rte_dpaa_qdma_pmd);
diff --git a/drivers/dma/dpaa/meson.build b/drivers/dma/dpaa/meson.build
new file mode 100644
index 0000000000..9ab0862ede
--- /dev/null
+++ b/drivers/dma/dpaa/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+	build = false
+	reason = 'only supported on linux'
+endif
+
+deps += ['dmadev', 'bus_dpaa']
+sources = files('dpaa_qdma.c')
+
+if cc.has_argument('-Wno-pointer-arith')
+	cflags += '-Wno-pointer-arith'
+endif
diff --git a/drivers/dma/dpaa/version.map b/drivers/dma/dpaa/version.map
new file mode 100644
index 0000000000..7bab7bea48
--- /dev/null
+++ b/drivers/dma/dpaa/version.map
@@ -0,0 +1,4 @@
+DPDK_22 {
+
+	local: *;
+};
diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
index 0c2c34cd00..2f22d65215 100644
--- a/drivers/dma/meson.build
+++ b/drivers/dma/meson.build
@@ -7,5 +7,6 @@ endif
 
 drivers = [
         'skeleton',
+	'dpaa',
 ]
 std_deps = ['dmadev']
-- 
2.25.1



* [dpdk-dev] [PATCH 2/6] dma/dpaa: add device probe and remove functionality
  2021-09-09 11:14 [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver Gagandeep Singh
  2021-09-09 11:14 ` [dpdk-dev] [PATCH 1/6] dma/dpaa: introduce " Gagandeep Singh
@ 2021-09-09 11:14 ` Gagandeep Singh
  2021-09-09 11:14 ` [dpdk-dev] [PATCH 3/6] dma/dpaa: add driver logs Gagandeep Singh
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-09-09 11:14 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch adds device initialization functionality.
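
At its core, initialization maps the QDMA controller's CCSR register
block into the process; condensed from dpaa_qdma_init() below (note that
going through /dev/mem requires root privileges):

  int fd = open("/dev/mem", O_RDWR);
  if (fd < 0)
          return -1;

  fsl_qdma->ctrl_base = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, QDMA_CCSR_BASE);
  close(fd);              /* the mapping remains valid after close() */
  if (fsl_qdma->ctrl_base == MAP_FAILED)
          return -1;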

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 440 ++++++++++++++++++++++++++++++++++-
 drivers/dma/dpaa/dpaa_qdma.h | 247 ++++++++++++++++++++
 2 files changed, 685 insertions(+), 2 deletions(-)
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.h

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 2ef3ee0c35..aea09edc9e 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -3,17 +3,453 @@
  */
 
 #include <rte_dpaa_bus.h>
+#include <rte_dmadev_pmd.h>
+
+#include "dpaa_qdma.h"
+
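+/* Integer log2 (floor), used to encode ring sizes in the queue mode registers. */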
+static inline int ilog2(int x)
+{
+	int log = 0;
+
+	x >>= 1;
+
+	while (x) {
+		log++;
+		x >>= 1;
+	}
+	return log;
+}
+
+static u32 qdma_readl(void *addr)
+{
+	return QDMA_IN(addr);
+}
+
+static void qdma_writel(u32 val, void *addr)
+{
+	QDMA_OUT(addr, val);
+}
+
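+/* Allocate DMA-able memory; returns the virtual address and fills *phy_addr. */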
+static void *dma_pool_alloc(int size, int aligned, dma_addr_t *phy_addr)
+{
+	void *virt_addr;
+
+	virt_addr = rte_malloc("dma pool alloc", size, aligned);
+	if (!virt_addr)
+		return NULL;
+
+	*phy_addr = rte_mem_virt2iova(virt_addr);
+
+	return virt_addr;
+}
+
+static void dma_pool_free(void *addr)
+{
+	rte_free(addr);
+}
+
+static void fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+	struct fsl_qdma_comp *comp_temp, *_comp_temp;
+	int id;
+
+	if (--fsl_queue->count)
+		goto finally;
+
+	id = (fsl_qdma->block_base - fsl_queue->block_base) /
+	      fsl_qdma->block_offset;
+
+	while (rte_atomic32_read(&wait_task[id]) == 1)
+		rte_delay_us(QDMA_DELAY);
+
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &fsl_queue->comp_used,	list) {
+		list_del(&comp_temp->list);
+		dma_pool_free(comp_temp->virt_addr);
+		dma_pool_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &fsl_queue->comp_free, list) {
+		list_del(&comp_temp->list);
+		dma_pool_free(comp_temp->virt_addr);
+		dma_pool_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+finally:
+	fsl_qdma->desc_allocated--;
+}
+
+static struct fsl_qdma_queue
+*fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *queue_head, *queue_temp;
+	int len, i, j;
+	int queue_num;
+	int blocks;
+	unsigned int queue_size[FSL_QDMA_QUEUE_MAX];
+
+	queue_num = fsl_qdma->n_queues;
+	blocks = fsl_qdma->num_blocks;
+
+	len = sizeof(*queue_head) * queue_num * blocks;
+	queue_head = rte_zmalloc("qdma: queue head", len, 0);
+	if (!queue_head)
+		return NULL;
+
+	for (i = 0; i < FSL_QDMA_QUEUE_MAX; i++)
+		queue_size[i] = QDMA_QUEUE_SIZE;
+
+	for (j = 0; j < blocks; j++) {
+		for (i = 0; i < queue_num; i++) {
+			if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+			    queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+				return NULL;
+			}
+			queue_temp = queue_head + i + (j * queue_num);
+
+			queue_temp->cq =
+			dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+				       queue_size[i],
+				       sizeof(struct fsl_qdma_format) *
+				       queue_size[i], &queue_temp->bus_addr);
+
+			if (!queue_temp->cq)
+				return NULL;
+
+			memset(queue_temp->cq, 0x0, queue_size[i] *
+			       sizeof(struct fsl_qdma_format));
+
+			queue_temp->block_base = fsl_qdma->block_base +
+				FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+			queue_temp->n_cq = queue_size[i];
+			queue_temp->id = i;
+			queue_temp->count = 0;
+			queue_temp->virt_head = queue_temp->cq;
+
+		}
+	}
+	return queue_head;
+}
+
+static struct fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
+{
+	struct fsl_qdma_queue *status_head;
+	unsigned int status_size;
+
+	status_size = QDMA_STATUS_SIZE;
+	if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+	    status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+		return NULL;
+	}
+
+	status_head = rte_zmalloc("qdma: status head", sizeof(*status_head), 0);
+	if (!status_head)
+		return NULL;
+
+	/*
+	 * Buffer for queue command
+	 */
+	status_head->cq = dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+					 status_size,
+					 sizeof(struct fsl_qdma_format) *
+					 status_size,
+					 &status_head->bus_addr);
+
+	if (!status_head->cq)
+		return NULL;
+
+	memset(status_head->cq, 0x0, status_size *
+	       sizeof(struct fsl_qdma_format));
+
+	status_head->n_cq = status_size;
+	status_head->virt_head = status_head->cq;
+
+	return status_head;
+}
+
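+/* Quiesce the engine: disable all queues and wait until the DMA is idle. */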
+static int fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
+{
+	void *ctrl = fsl_qdma->ctrl_base;
+	void *block;
+	int i, count = RETRIES;
+	unsigned int j;
+	u32 reg;
+
+	/* Disable the command queue and wait for idle state. */
+	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+	reg |= FSL_QDMA_DMR_DQD;
+	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+		for (i = 0; i < FSL_QDMA_QUEUE_NUM_MAX; i++)
+			qdma_writel(0, block + FSL_QDMA_BCQMR(i));
+	}
+	while (true) {
+		reg = qdma_readl(ctrl + FSL_QDMA_DSR);
+		if (!(reg & FSL_QDMA_DSR_DB))
+			break;
+		if (count-- < 0)
+			return -EBUSY;
+		rte_delay_us(100);
+	}
+
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+
+		/* Disable status queue. */
+		qdma_writel(0, block + FSL_QDMA_BSQMR);
+
+		/*
+		 * clear the command queue interrupt detect register for
+		 * all queues.
+		 */
+		qdma_writel(0xffffffff, block + FSL_QDMA_BCQIDR(0));
+	}
+
+	return 0;
+}
+
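+/* Point the command/status queue registers at memory and re-enable the DMA. */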
+static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+	struct fsl_qdma_queue *temp;
+	void *ctrl = fsl_qdma->ctrl_base;
+	void *block;
+	u32 i, j;
+	u32 reg;
+	int ret, val;
+
+	/* Try to halt the qDMA engine first. */
+	ret = fsl_qdma_halt(fsl_qdma);
+	if (ret) {
+		return ret;
+	}
+
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+		for (i = 0; i < fsl_qdma->n_queues; i++) {
+			temp = fsl_queue + i + (j * fsl_qdma->n_queues);
+			/*
+			 * Initialize Command Queue registers to
+			 * point to the first
+			 * command descriptor in memory.
+			 * Dequeue Pointer Address Registers
+			 * Enqueue Pointer Address Registers
+			 */
+
+			qdma_writel(lower_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQDPA_SADDR(i));
+			qdma_writel(upper_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEDPA_SADDR(i));
+			qdma_writel(lower_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEPA_SADDR(i));
+			qdma_writel(upper_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEEPA_SADDR(i));
+
+			/* Initialize the queue mode. */
+			reg = FSL_QDMA_BCQMR_EN;
+			reg |= FSL_QDMA_BCQMR_CD_THLD(ilog2(temp->n_cq) - 4);
+			reg |= FSL_QDMA_BCQMR_CQ_SIZE(ilog2(temp->n_cq) - 6);
+			qdma_writel(reg, block + FSL_QDMA_BCQMR(i));
+		}
+
+		/*
+		 * Workaround for erratum: ERR010812.
+		 * We must enable XOFF to avoid enqueue rejections.
+		 * Setting SQCCMR ENTER_WM to 0x20.
+		 */
+
+		qdma_writel(FSL_QDMA_SQCCMR_ENTER_WM,
+			    block + FSL_QDMA_SQCCMR);
+
+		/*
+		 * Initialize status queue registers to point to the first
+		 * command descriptor in memory.
+		 * Dequeue Pointer Address Registers
+		 * Enqueue Pointer Address Registers
+		 */
+
+		qdma_writel(
+			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEEPAR);
+		qdma_writel(
+			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEPAR);
+		qdma_writel(
+			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEDPAR);
+		qdma_writel(
+			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQDPAR);
+		/* Disable status queue interrupt. */
+
+		qdma_writel(0x0, block + FSL_QDMA_BCQIER(0));
+		qdma_writel(0x0, block + FSL_QDMA_BSQICR);
+		qdma_writel(0x0, block + FSL_QDMA_CQIER);
+
+		/* Initialize the status queue mode. */
+		reg = FSL_QDMA_BSQMR_EN;
+		val = ilog2(fsl_qdma->status[j]->n_cq) - 6;
+		reg |= FSL_QDMA_BSQMR_CQ_SIZE(val);
+		qdma_writel(reg, block + FSL_QDMA_BSQMR);
+	}
+
+	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+	reg &= ~FSL_QDMA_DMR_DQD;
+	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+
+	return 0;
+}
+
+static void
+dma_release(void *fsl_chan)
+{
+	((struct fsl_qdma_chan *)fsl_chan)->free = true;
+	fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
+}
+
+static int
+dpaa_qdma_init(struct rte_dmadev *dmadev)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->dev_private;
+	struct fsl_qdma_chan *fsl_chan;
+	uint64_t phys_addr;
+	unsigned int len;
+	int ccsr_qdma_fd;
+	int regs_size;
+	int ret;
+	u32 i;
+
+	fsl_qdma->desc_allocated = 0;
+	fsl_qdma->n_chans = VIRT_CHANNELS;
+	fsl_qdma->n_queues = QDMA_QUEUES;
+	fsl_qdma->num_blocks = QDMA_BLOCKS;
+	fsl_qdma->block_offset = QDMA_BLOCK_OFFSET;
+
+	len = sizeof(*fsl_chan) * fsl_qdma->n_chans;
+	fsl_qdma->chans = rte_zmalloc("qdma: fsl chans", len, 0);
+	if (!fsl_qdma->chans)
+		return -1;
+
+	len = sizeof(struct fsl_qdma_queue *) * fsl_qdma->num_blocks;
+	fsl_qdma->status = rte_zmalloc("qdma: fsl status", len, 0);
+	if (!fsl_qdma->status) {
+		rte_free(fsl_qdma->chans);
+		return -1;
+	}
+
+	for (i = 0; i < fsl_qdma->num_blocks; i++) {
+		rte_atomic32_init(&wait_task[i]);
+		fsl_qdma->status[i] = fsl_qdma_prep_status_queue();
+		if (!fsl_qdma->status[i])
+			goto err;
+	}
+
+	ccsr_qdma_fd = open("/dev/mem", O_RDWR);
+	if (unlikely(ccsr_qdma_fd < 0)) {
+		goto err;
+	}
+
+	regs_size = fsl_qdma->block_offset * (fsl_qdma->num_blocks + 2);
+	phys_addr = QDMA_CCSR_BASE;
+	fsl_qdma->ctrl_base = mmap(NULL, regs_size, PROT_READ |
+					 PROT_WRITE, MAP_SHARED,
+					 ccsr_qdma_fd, phys_addr);
+
+	close(ccsr_qdma_fd);
+	if (fsl_qdma->ctrl_base == MAP_FAILED) {
+		goto err;
+	}
+
+	fsl_qdma->status_base = fsl_qdma->ctrl_base + QDMA_BLOCK_OFFSET;
+	fsl_qdma->block_base = fsl_qdma->status_base + QDMA_BLOCK_OFFSET;
+
+	fsl_qdma->queue = fsl_qdma_alloc_queue_resources(fsl_qdma);
+	if (!fsl_qdma->queue) {
+		munmap(fsl_qdma->ctrl_base, regs_size);
+		goto err;
+	}
+
+	for (i = 0; i < fsl_qdma->n_chans; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		fsl_chan->qdma = fsl_qdma;
+		fsl_chan->queue = fsl_qdma->queue + i % (fsl_qdma->n_queues *
+							fsl_qdma->num_blocks);
+		fsl_chan->free = true;
+	}
+
+	ret = fsl_qdma_reg_init(fsl_qdma);
+	if (ret) {
+		munmap(fsl_qdma->ctrl_base, regs_size);
+		goto err;
+	}
+
+	return 0;
+
+err:
+	rte_free(fsl_qdma->chans);
+	rte_free(fsl_qdma->status);
+
+	return -1;
+}
 
 static int
 dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
-		__rte_unused struct rte_dpaa_device *dpaa_dev)
+		struct rte_dpaa_device *dpaa_dev)
 {
+	struct rte_dmadev *dmadev;
+	int ret;
+
+	dmadev = rte_dmadev_pmd_allocate(dpaa_dev->device.name);
+	if (!dmadev) {
+		return -EINVAL;
+	}
+
+	dmadev->dev_private = rte_zmalloc("struct fsl_qdma_engine *",
+				       sizeof(struct fsl_qdma_engine),
+				       RTE_CACHE_LINE_SIZE);
+	if (!dmadev->dev_private) {
+		(void)rte_dmadev_pmd_release(dmadev);
+		return -ENOMEM;
+	}
+
+	dpaa_dev->dmadev = dmadev;
+
+	/* Invoke PMD device initialization function */
+	ret = dpaa_qdma_init(dmadev);
+	if (ret) {
+		rte_free(dmadev->dev_private);
+		(void)rte_dmadev_pmd_release(dmadev);
+		return ret;
+	}
+
 	return 0;
 }
 
 static int
-dpaa_qdma_remove(__rte_unused struct rte_dpaa_device *dpaa_dev)
+dpaa_qdma_remove(struct rte_dpaa_device *dpaa_dev)
 {
+	struct rte_dmadev *dmadev = dpaa_dev->dmadev;
+	struct fsl_qdma_engine *fsl_qdma = dmadev->dev_private;
+	int i = 0, max = QDMA_QUEUES * QDMA_BLOCKS;
+
+	for (i = 0; i < max; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		if (fsl_chan->free == false)
+			dma_release(fsl_chan);
+	}
+
+	rte_free(fsl_qdma->status);
+	rte_free(fsl_qdma->chans);
+
 	return 0;
 }
 
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
new file mode 100644
index 0000000000..cc0d1f114e
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -0,0 +1,247 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#ifndef _DPAA_QDMA_H_
+#define _DPAA_QDMA_H_
+
+#define CORE_NUMBER 4
+#define RETRIES	5
+
+#define FSL_QDMA_DMR			0x0
+#define FSL_QDMA_DSR			0x4
+#define FSL_QDMA_DEIER			0xe00
+#define FSL_QDMA_DEDR			0xe04
+#define FSL_QDMA_DECFDW0R		0xe10
+#define FSL_QDMA_DECFDW1R		0xe14
+#define FSL_QDMA_DECFDW2R		0xe18
+#define FSL_QDMA_DECFDW3R		0xe1c
+#define FSL_QDMA_DECFQIDR		0xe30
+#define FSL_QDMA_DECBR			0xe34
+
+#define FSL_QDMA_BCQMR(x)		(0xc0 + 0x100 * (x))
+#define FSL_QDMA_BCQSR(x)		(0xc4 + 0x100 * (x))
+#define FSL_QDMA_BCQEDPA_SADDR(x)	(0xc8 + 0x100 * (x))
+#define FSL_QDMA_BCQDPA_SADDR(x)	(0xcc + 0x100 * (x))
+#define FSL_QDMA_BCQEEPA_SADDR(x)	(0xd0 + 0x100 * (x))
+#define FSL_QDMA_BCQEPA_SADDR(x)	(0xd4 + 0x100 * (x))
+#define FSL_QDMA_BCQIER(x)		(0xe0 + 0x100 * (x))
+#define FSL_QDMA_BCQIDR(x)		(0xe4 + 0x100 * (x))
+
+#define FSL_QDMA_SQEDPAR		0x808
+#define FSL_QDMA_SQDPAR			0x80c
+#define FSL_QDMA_SQEEPAR		0x810
+#define FSL_QDMA_SQEPAR			0x814
+#define FSL_QDMA_BSQMR			0x800
+#define FSL_QDMA_BSQSR			0x804
+#define FSL_QDMA_BSQICR			0x828
+#define FSL_QDMA_CQMR			0xa00
+#define FSL_QDMA_CQDSCR1		0xa08
+#define FSL_QDMA_CQDSCR2                0xa0c
+#define FSL_QDMA_CQIER			0xa10
+#define FSL_QDMA_CQEDR			0xa14
+#define FSL_QDMA_SQCCMR			0xa20
+
+#define FSL_QDMA_SQICR_ICEN
+
+#define FSL_QDMA_CQIDR_CQT		0xff000000
+#define FSL_QDMA_CQIDR_SQPE		0x800000
+#define FSL_QDMA_CQIDR_SQT		0x8000
+
+#define FSL_QDMA_BCQIER_CQTIE		0x8000
+#define FSL_QDMA_BCQIER_CQPEIE		0x800000
+#define FSL_QDMA_BSQICR_ICEN		0x80000000
+#define FSL_QDMA_BSQICR_ICST(x)		((x) << 16)
+#define FSL_QDMA_CQIER_MEIE		0x80000000
+#define FSL_QDMA_CQIER_TEIE		0x1
+#define FSL_QDMA_SQCCMR_ENTER_WM	0x200000
+
+#define FSL_QDMA_QUEUE_MAX		8
+
+#define FSL_QDMA_BCQMR_EN		0x80000000
+#define FSL_QDMA_BCQMR_EI		0x40000000
+#define FSL_QDMA_BCQMR_EI_BE           0x40
+#define FSL_QDMA_BCQMR_CD_THLD(x)	((x) << 20)
+#define FSL_QDMA_BCQMR_CQ_SIZE(x)	((x) << 16)
+
+#define FSL_QDMA_BCQSR_QF		0x10000
+#define FSL_QDMA_BCQSR_XOFF		0x1
+#define FSL_QDMA_BCQSR_QF_XOFF_BE      0x1000100
+
+#define FSL_QDMA_BSQMR_EN		0x80000000
+#define FSL_QDMA_BSQMR_DI		0x40000000
+#define FSL_QDMA_BSQMR_DI_BE		0x40
+#define FSL_QDMA_BSQMR_CQ_SIZE(x)	((x) << 16)
+
+#define FSL_QDMA_BSQSR_QE		0x20000
+#define FSL_QDMA_BSQSR_QE_BE		0x200
+#define FSL_QDMA_BSQSR_QF		0x10000
+
+#define FSL_QDMA_DMR_DQD		0x40000000
+#define FSL_QDMA_DSR_DB			0x80000000
+
+#define FSL_QDMA_COMMAND_BUFFER_SIZE	64
+#define FSL_QDMA_DESCRIPTOR_BUFFER_SIZE 32
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MIN	64
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MAX	16384
+#define FSL_QDMA_QUEUE_NUM_MAX		8
+
+#define FSL_QDMA_CMD_RWTTYPE		0x4
+#define FSL_QDMA_CMD_LWC                0x2
+
+#define FSL_QDMA_CMD_RWTTYPE_OFFSET	28
+#define FSL_QDMA_CMD_NS_OFFSET		27
+#define FSL_QDMA_CMD_DQOS_OFFSET	24
+#define FSL_QDMA_CMD_WTHROTL_OFFSET	20
+#define FSL_QDMA_CMD_DSEN_OFFSET	19
+#define FSL_QDMA_CMD_LWC_OFFSET		16
+
+#define QDMA_CCDF_STATUS		20
+#define QDMA_CCDF_OFFSET		20
+#define QDMA_CCDF_MASK			GENMASK(28, 20)
+#define QDMA_CCDF_FOTMAT		BIT(29)
+#define QDMA_CCDF_SER			BIT(30)
+
+#define QDMA_SG_FIN			BIT(30)
+#define QDMA_SG_EXT			BIT(31)
+#define QDMA_SG_LEN_MASK		GENMASK(29, 0)
+
+#define QDMA_BIG_ENDIAN			1
+#define COMP_TIMEOUT			100000
+#define COMMAND_QUEUE_OVERFLLOW		10
+
+/* qdma engine attribute */
+#define QDMA_QUEUE_SIZE 64
+#define QDMA_STATUS_SIZE 64
+#define QDMA_CCSR_BASE 0x8380000
+#define VIRT_CHANNELS 32
+#define QDMA_BLOCK_OFFSET 0x10000
+#define QDMA_BLOCKS 4
+#define QDMA_QUEUES 8
+#define QDMA_DELAY 1000
+
+#define __arch_getq(a)		(*(volatile u64 *)(a))
+#define __arch_putq(v, a)	(*(volatile u64 *)(a) = (v))
+#define __arch_getq32(a)	(*(volatile u32 *)(a))
+#define __arch_putq32(v, a)	(*(volatile u32 *)(a) = (v))
+#define readq32(c) \
+	({ u32 __v = __arch_getq32(c); rte_io_rmb(); __v; })
+#define writeq32(v, c) \
+	({ u32 __v = v; __arch_putq32(__v, c); __v; })
+#define ioread32(_p)		readq32(_p)
+#define iowrite32(_v, _p)	writeq32(_v, _p)
+
+#define ioread32be(_p)          be32_to_cpu(readq32(_p))
+#define iowrite32be(_v, _p)	writeq32(be32_to_cpu(_v), _p)
+
+#ifdef QDMA_BIG_ENDIAN
+#define QDMA_IN(addr)		ioread32be(addr)
+#define QDMA_OUT(addr, val)	iowrite32be(val, addr)
+#define QDMA_IN_BE(addr)	ioread32(addr)
+#define QDMA_OUT_BE(addr, val)	iowrite32(val, addr)
+#else
+#define QDMA_IN(addr)		ioread32(addr)
+#define QDMA_OUT(addr, val)	iowrite32(val, addr)
+#define QDMA_IN_BE(addr)	ioread32be(addr)
+#define QDMA_OUT_BE(addr, val)	iowrite32be(val, addr)
+#endif
+
+#define FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma_engine, x)			\
+	(((fsl_qdma_engine)->block_offset) * (x))
+
+typedef void (*dma_call_back)(void *params);
+
+/* qDMA Command Descriptor Formats */
+struct fsl_qdma_format {
+	__le32 status; /* ser, status */
+	__le32 cfg;	/* format, offset */
+	union {
+		struct {
+			__le32 addr_lo;	/* low 32-bits of 40-bit address */
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u8 __reserved1[2];
+			u8 cfg8b_w1; /* dd, queue */
+		};
+		__le64 data;
+	};
+};
+
+/* qDMA Source Descriptor Format */
+struct fsl_qdma_sdf {
+	__le32 rev3;
+	__le32 cfg; /* rev4, bit[0-11] - ssd, bit[12-23] sss */
+	__le32 rev5;
+	__le32 cmd;
+};
+
+/* qDMA Destination Descriptor Format */
+struct fsl_qdma_ddf {
+	__le32 rev1;
+	__le32 cfg; /* rev2, bit[0-11] - dsd, bit[12-23] - dss */
+	__le32 rev3;
+	__le32 cmd;
+};
+
+enum dma_status {
+	DMA_COMPLETE,
+	DMA_IN_PROGRESS,
+	DMA_IN_PREPAR,
+	DMA_PAUSED,
+	DMA_ERROR,
+};
+
+struct fsl_qdma_chan {
+	struct fsl_qdma_engine	*qdma;
+	struct fsl_qdma_queue	*queue;
+	bool			free;
+	struct list_head	list;
+};
+
+struct fsl_qdma_list {
+	struct list_head	dma_list;
+};
+
+struct fsl_qdma_queue {
+	struct fsl_qdma_format	*virt_head;
+	struct list_head	comp_used;
+	struct list_head	comp_free;
+	dma_addr_t		bus_addr;
+	u32                     n_cq;
+	u32			id;
+	u32			count;
+	struct fsl_qdma_format	*cq;
+	void			*block_base;
+};
+
+struct fsl_qdma_comp {
+	dma_addr_t              bus_addr;
+	dma_addr_t              desc_bus_addr;
+	void			*virt_addr;
+	int			index;
+	void			*desc_virt_addr;
+	struct fsl_qdma_chan	*qchan;
+	dma_call_back		call_back_func;
+	void			*params;
+	struct list_head	list;
+};
+
+struct fsl_qdma_engine {
+	int			desc_allocated;
+	void			*ctrl_base;
+	void			*status_base;
+	void			*block_base;
+	u32			n_chans;
+	u32			n_queues;
+	int			error_irq;
+	struct fsl_qdma_queue	*queue;
+	struct fsl_qdma_queue	**status;
+	struct fsl_qdma_chan	*chans;
+	u32			num_blocks;
+	u8			free_block_id;
+	u32			vchan_map[4];
+	int			block_offset;
+};
+
+static rte_atomic32_t wait_task[CORE_NUMBER];
+
+#endif /* _DPAA_QDMA_H_ */
-- 
2.25.1



* [dpdk-dev] [PATCH 3/6] dma/dpaa: add driver logs
  2021-09-09 11:14 [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver Gagandeep Singh
  2021-09-09 11:14 ` [dpdk-dev] [PATCH 1/6] dma/dpaa: introduce " Gagandeep Singh
  2021-09-09 11:14 ` [dpdk-dev] [PATCH 2/6] dma/dpaa: add device probe and remove functionality Gagandeep Singh
@ 2021-09-09 11:14 ` Gagandeep Singh
  2021-09-09 11:14 ` [dpdk-dev] [PATCH 4/6] dma/dpaa: support basic operations Gagandeep Singh
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-09-09 11:14 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch adds log support to the DPAA DMA driver.
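
With this in place, the driver's verbosity can be raised at startup with
the standard EAL option, e.g. --log-level=pmd.dma.dpaa:debug (the logtype
name is the build-generated default for drivers/dma/dpaa; pmd.dma.dpaa is
an assumption based on the usual naming convention).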

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c      | 10 +++++++
 drivers/dma/dpaa/dpaa_qdma_logs.h | 46 +++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+)
 create mode 100644 drivers/dma/dpaa/dpaa_qdma_logs.h

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index aea09edc9e..8b0454abce 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -6,6 +6,7 @@
 #include <rte_dmadev_pmd.h>
 
 #include "dpaa_qdma.h"
+#include "dpaa_qdma_logs.h"
 
 static inline int ilog2(int x)
 {
@@ -107,6 +108,7 @@ static struct fsl_qdma_queue
 		for (i = 0; i < queue_num; i++) {
 			if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
 			    queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+				DPAA_QDMA_ERR("Get wrong queue-sizes.\n");
 				return NULL;
 			}
 			queue_temp = queue_head + i + (j * queue_num);
@@ -143,6 +145,7 @@ static struct fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
 	status_size = QDMA_STATUS_SIZE;
 	if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
 	    status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+		DPAA_QDMA_ERR("Get wrong status_size.\n");
 		return NULL;
 	}
 
@@ -227,6 +230,7 @@ static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 	/* Try to halt the qDMA engine first. */
 	ret = fsl_qdma_halt(fsl_qdma);
 	if (ret) {
+		DPAA_QDMA_ERR("DMA halt failed!");
 		return ret;
 	}
 
@@ -353,6 +357,7 @@ dpaa_qdma_init(struct rte_dmadev *dmadev)
 
 	ccsr_qdma_fd = open("/dev/mem", O_RDWR);
 	if (unlikely(ccsr_qdma_fd < 0)) {
+		DPAA_QDMA_ERR("Can not open /dev/mem for qdma CCSR map");
 		goto err;
 	}
 
@@ -364,6 +369,8 @@ dpaa_qdma_init(struct rte_dmadev *dmadev)
 
 	close(ccsr_qdma_fd);
 	if (fsl_qdma->ctrl_base == MAP_FAILED) {
+		DPAA_QDMA_ERR("Can not map CCSR base qdma: Phys: %08" PRIx64
+		       "size %d\n", phys_addr, regs_size);
 		goto err;
 	}
 
@@ -387,6 +394,7 @@ dpaa_qdma_init(struct rte_dmadev *dmadev)
 
 	ret = fsl_qdma_reg_init(fsl_qdma);
 	if (ret) {
+		DPAA_QDMA_ERR("Can't Initialize the qDMA engine.\n");
 		munmap(fsl_qdma->ctrl_base, regs_size);
 		goto err;
 	}
@@ -409,6 +417,7 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
 
 	dmadev = rte_dmadev_pmd_allocate(dpaa_dev->device.name);
 	if (!dmadev) {
+		DPAA_QDMA_ERR("Unable to allocate dmadevice");
 		return -EINVAL;
 	}
 
@@ -462,3 +471,4 @@ static struct rte_dpaa_driver rte_dpaa_qdma_pmd = {
 };
 
 RTE_PMD_REGISTER_DPAA(dpaa_qdma, rte_dpaa_qdma_pmd);
+RTE_LOG_REGISTER_DEFAULT(dpaa_qdma_logtype, INFO);
diff --git a/drivers/dma/dpaa/dpaa_qdma_logs.h b/drivers/dma/dpaa/dpaa_qdma_logs.h
new file mode 100644
index 0000000000..01d4a508fc
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma_logs.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#ifndef __DPAA_QDMA_LOGS_H__
+#define __DPAA_QDMA_LOGS_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+extern int dpaa_qdma_logtype;
+
+#define DPAA_QDMA_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_qdma_logtype, "dpaa_qdma: " \
+		fmt "\n", ## args)
+
+#define DPAA_QDMA_DEBUG(fmt, args...) \
+	rte_log(RTE_LOG_DEBUG, dpaa_qdma_logtype, "dpaa_qdma: %s(): " \
+		fmt "\n", __func__, ## args)
+
+#define DPAA_QDMA_FUNC_TRACE() DPAA_QDMA_DEBUG(">>")
+
+#define DPAA_QDMA_INFO(fmt, args...) \
+	DPAA_QDMA_LOG(INFO, fmt, ## args)
+#define DPAA_QDMA_ERR(fmt, args...) \
+	DPAA_QDMA_LOG(ERR, fmt, ## args)
+#define DPAA_QDMA_WARN(fmt, args...) \
+	DPAA_QDMA_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define DPAA_QDMA_DP_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "dpaa_qdma: " fmt "\n", ## args)
+
+#define DPAA_QDMA_DP_DEBUG(fmt, args...) \
+	DPAA_QDMA_DP_LOG(DEBUG, fmt, ## args)
+#define DPAA_QDMA_DP_INFO(fmt, args...) \
+	DPAA_QDMA_DP_LOG(INFO, fmt, ## args)
+#define DPAA_QDMA_DP_WARN(fmt, args...) \
+	DPAA_QDMA_DP_LOG(WARNING, fmt, ## args)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __DPAA_QDMA_LOGS_H__ */
-- 
2.25.1



* [dpdk-dev] [PATCH 4/6] dma/dpaa: support basic operations
  2021-09-09 11:14 [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver Gagandeep Singh
                   ` (2 preceding siblings ...)
  2021-09-09 11:14 ` [dpdk-dev] [PATCH 3/6] dma/dpaa: add driver logs Gagandeep Singh
@ 2021-09-09 11:14 ` Gagandeep Singh
  2021-09-09 11:14 ` [dpdk-dev] [PATCH 5/6] dma/dpaa: support DMA operations Gagandeep Singh
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-09-09 11:14 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch supports basic DMA operations, including the device
capability query and channel setup.
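
Together these ops let an application bring the device up with the usual
dmadev control-path sequence. A sketch using the rte_dmadev_* API of this
series (struct field and enum names follow the dmadev proposal of this
period and are illustrative only):

  struct rte_dmadev_info info;
  struct rte_dmadev_conf dev_conf = { .nb_vchans = 1 };
  struct rte_dmadev_vchan_conf qconf = {
          .direction = RTE_DMA_DIR_MEM_TO_MEM,
  };

  rte_dmadev_info_get(dev_id, &info);        /* reports max_vchans = 1 */
  rte_dmadev_configure(dev_id, &dev_conf);
  rte_dmadev_vchan_setup(dev_id, 0, &qconf); /* binds vchan 0 to a HW queue */
  rte_dmadev_start(dev_id);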

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 182 +++++++++++++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h |   6 ++
 2 files changed, 188 insertions(+)

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 8b0454abce..0297166550 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -8,6 +8,18 @@
 #include "dpaa_qdma.h"
 #include "dpaa_qdma_logs.h"
 
+static inline void
+qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
+{
+	ccdf->addr_hi = upper_32_bits(addr);
+	ccdf->addr_lo = rte_cpu_to_le_32(lower_32_bits(addr));
+}
+
+static inline void qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
+{
+	csgf->cfg = rte_cpu_to_le_32(len & QDMA_SG_LEN_MASK);
+}
+
 static inline int ilog2(int x)
 {
 	int log = 0;
@@ -84,6 +96,64 @@ static void fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
 finally:
 	fsl_qdma->desc_allocated--;
 }
+
+/*
+ * Pre-request command descriptor and compound S/G for enqueue.
+ */
+static int fsl_qdma_pre_request_enqueue_comp_sd_desc(
+					struct fsl_qdma_queue *queue,
+					int size, int aligned)
+{
+	struct fsl_qdma_comp *comp_temp;
+	struct fsl_qdma_sdf *sdf;
+	struct fsl_qdma_ddf *ddf;
+	struct fsl_qdma_format *csgf_desc;
+	int i;
+
+	for (i = 0; i < (int)(queue->n_cq + COMMAND_QUEUE_OVERFLLOW); i++) {
+		comp_temp = rte_zmalloc("qdma: comp temp",
+					sizeof(*comp_temp), 0);
+		if (!comp_temp)
+			return -ENOMEM;
+
+		comp_temp->virt_addr =
+		dma_pool_alloc(size, aligned, &comp_temp->bus_addr);
+		if (!comp_temp->virt_addr) {
+			rte_free(comp_temp);
+			return -ENOMEM;
+		}
+
+		comp_temp->desc_virt_addr =
+		dma_pool_alloc(size, aligned, &comp_temp->desc_bus_addr);
+		if (!comp_temp->desc_virt_addr)
+			return -ENOMEM;
+
+		memset(comp_temp->virt_addr, 0, FSL_QDMA_COMMAND_BUFFER_SIZE);
+		memset(comp_temp->desc_virt_addr, 0,
+		       FSL_QDMA_DESCRIPTOR_BUFFER_SIZE);
+
+		csgf_desc = (struct fsl_qdma_format *)comp_temp->virt_addr + 1;
+		sdf = (struct fsl_qdma_sdf *)comp_temp->desc_virt_addr;
+		ddf = (struct fsl_qdma_ddf *)comp_temp->desc_virt_addr + 1;
+		/* Compound Command Descriptor(Frame List Table) */
+		qdma_desc_addr_set64(csgf_desc, comp_temp->desc_bus_addr);
+		/* It must be 32 as Compound S/G Descriptor */
+		qdma_csgf_set_len(csgf_desc, 32);
+		/* Descriptor Buffer */
+		sdf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
+			       FSL_QDMA_CMD_RWTTYPE_OFFSET);
+		ddf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
+			       FSL_QDMA_CMD_RWTTYPE_OFFSET);
+		ddf->cmd |= rte_cpu_to_le_32(FSL_QDMA_CMD_LWC <<
+				FSL_QDMA_CMD_LWC_OFFSET);
+
+		list_add_tail(&comp_temp->list, &queue->comp_free);
+	}
+
+	return 0;
+}
+
+
 static struct fsl_qdma_queue
 *fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
 {
@@ -311,6 +381,79 @@ static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
+static int fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+	int ret;
+
+	if (fsl_queue->count++)
+		goto finally;
+
+	INIT_LIST_HEAD(&fsl_queue->comp_free);
+	INIT_LIST_HEAD(&fsl_queue->comp_used);
+
+	ret = fsl_qdma_pre_request_enqueue_comp_sd_desc(fsl_queue,
+				FSL_QDMA_COMMAND_BUFFER_SIZE, 64);
+	if (ret) {
+		DPAA_QDMA_ERR(
+			"failed to alloc dma buffer for comp descriptor\n");
+		goto exit;
+	}
+
+finally:
+	return fsl_qdma->desc_allocated++;
+
+exit:
+	return -ENOMEM;
+}
+
+static int
+dpaa_info_get(const struct rte_dmadev *dev, struct rte_dmadev_info *dev_info,
+	      uint32_t info_sz)
+{
+#define DPAADMA_MAX_DESC        128
+#define DPAADMA_MIN_DESC        128
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(info_sz);
+
+	dev_info->dev_capa = RTE_DMADEV_CAPA_MEM_TO_MEM |
+			     RTE_DMADEV_CAPA_MEM_TO_DEV |
+			     RTE_DMADEV_CAPA_DEV_TO_DEV |
+			     RTE_DMADEV_CAPA_DEV_TO_MEM |
+			     RTE_DMADEV_CAPA_SILENT |
+			     RTE_DMADEV_CAPA_OPS_COPY;
+	dev_info->max_vchans = 1;
+	dev_info->max_desc = DPAADMA_MAX_DESC;
+	dev_info->min_desc = DPAADMA_MIN_DESC;
+
+	return 0;
+}
+
+static int
+dpaa_get_channel(struct fsl_qdma_engine *fsl_qdma,  uint16_t vchan)
+{
+	u32 i, start, end;
+
+	start = fsl_qdma->free_block_id * QDMA_QUEUES;
+	fsl_qdma->free_block_id++;
+
+	end = start + 1;
+	for (i = start; i < end; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		if (fsl_chan->free) {
+			fsl_chan->free = false;
+			fsl_qdma_alloc_chan_resources(fsl_chan);
+			fsl_qdma->vchan_map[vchan] = i;
+			return 0;
+		}
+	}
+
+	return -1;
+}
+
 static void
 dma_release(void *fsl_chan)
 {
@@ -318,6 +461,43 @@ dma_release(void *fsl_chan)
 	fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
 }
 
+static int
+dpaa_qdma_configure(__rte_unused struct rte_dmadev *dmadev,
+		    __rte_unused const struct rte_dmadev_conf *dev_conf)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_start(__rte_unused struct rte_dmadev *dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_close(__rte_unused struct rte_dmadev *dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_queue_setup(struct rte_dmadev *dmadev,
+		      uint16_t vchan,
+		      __rte_unused const struct rte_dmadev_vchan_conf *conf)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->dev_private;
+
+	return dpaa_get_channel(fsl_qdma, vchan);
+}
+
+static struct rte_dmadev_ops dpaa_qdma_ops = {
+	.dev_info_get		  = dpaa_info_get,
+	.dev_configure            = dpaa_qdma_configure,
+	.dev_start                = dpaa_qdma_start,
+	.dev_close                = dpaa_qdma_close,
+	.vchan_setup		  = dpaa_qdma_queue_setup,
+};
+
 static int
 dpaa_qdma_init(struct rte_dmadev *dmadev)
 {
@@ -430,6 +610,8 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
 	}
 
 	dpaa_dev->dmadev = dmadev;
+	dmadev->dev_ops = &dpaa_qdma_ops;
+	dmadev->device = &dpaa_dev->device;
 
 	/* Invoke PMD device initialization function */
 	ret = dpaa_qdma_init(dmadev);
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
index cc0d1f114e..f482b16334 100644
--- a/drivers/dma/dpaa/dpaa_qdma.h
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -8,6 +8,12 @@
 #define CORE_NUMBER 4
 #define RETRIES	5
 
+#ifndef GENMASK
+#define BITS_PER_LONG	(__SIZEOF_LONG__ * 8)
+#define GENMASK(h, l) \
+		(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+#endif
+
 #define FSL_QDMA_DMR			0x0
 #define FSL_QDMA_DSR			0x4
 #define FSL_QDMA_DEIER			0xe00
-- 
2.25.1



* [dpdk-dev] [PATCH 5/6] dma/dpaa: support DMA operations
  2021-09-09 11:14 [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver Gagandeep Singh
                   ` (3 preceding siblings ...)
  2021-09-09 11:14 ` [dpdk-dev] [PATCH 4/6] dma/dpaa: support basic operations Gagandeep Singh
@ 2021-09-09 11:14 ` Gagandeep Singh
  2021-09-09 11:15 ` [dpdk-dev] [PATCH 6/6] doc: add user guide of DPAA DMA driver Gagandeep Singh
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-09-09 11:14 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch supports the copy, submit, completed, and completed-status
functionality of the DMA driver.
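
On the datapath, copy() only writes a command descriptor; the doorbell
(FSL_QDMA_BCQMR_EI) is rung either per operation via RTE_DMA_OP_FLAG_SUBMIT
or once per batch via submit(). A sketch of the batched pattern, matching
the PMD callbacks below (BURST, src[], dst[] are placeholders):

  uint16_t last_idx, nb_done;
  enum rte_dma_status_code st[BURST];

  /* Enqueue a burst without the SUBMIT flag: descriptors are written
   * into the command queue, but the hardware is not kicked yet.
   */
  for (i = 0; i < BURST; i++)
          rte_dmadev_copy(dev_id, vchan, src[i], dst[i], len, 0);

  /* One doorbell for the whole burst (maps to dpaa_qdma_submit()). */
  rte_dmadev_submit(dev_id, vchan);

  /* Reap completions; st[] entries are RTE_DMA_STATUS_SUCCESSFUL on
   * success (see fsl_qdma_queue_transfer_complete()).
   */
  nb_done = rte_dmadev_completed_status(dev_id, vchan, BURST,
                                        &last_idx, st);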

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 346 +++++++++++++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h |   4 +
 2 files changed, 350 insertions(+)

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 0297166550..1ce1999165 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -15,11 +15,48 @@ qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
 	ccdf->addr_lo = rte_cpu_to_le_32(lower_32_bits(addr));
 }
 
+static inline u64
+qdma_ccdf_get_queue(const struct fsl_qdma_format *ccdf)
+{
+	return ccdf->cfg8b_w1 & 0xff;
+}
+
+static inline int
+qdma_ccdf_get_offset(const struct fsl_qdma_format *ccdf)
+{
+	return (rte_le_to_cpu_32(ccdf->cfg) & QDMA_CCDF_MASK)
+		>> QDMA_CCDF_OFFSET;
+}
+
+static inline void
+qdma_ccdf_set_format(struct fsl_qdma_format *ccdf, int offset)
+{
+	ccdf->cfg = rte_cpu_to_le_32(QDMA_CCDF_FOTMAT | offset);
+}
+
+static inline int
+qdma_ccdf_get_status(const struct fsl_qdma_format *ccdf)
+{
+	return (rte_le_to_cpu_32(ccdf->status) & QDMA_CCDF_MASK)
+		>> QDMA_CCDF_STATUS;
+}
+
+static inline void
+qdma_ccdf_set_ser(struct fsl_qdma_format *ccdf, int status)
+{
+	ccdf->status = rte_cpu_to_le_32(QDMA_CCDF_SER | status);
+}
+
 static inline void qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
 {
 	csgf->cfg = rte_cpu_to_le_32(len & QDMA_SG_LEN_MASK);
 }
 
+static inline void qdma_csgf_set_f(struct fsl_qdma_format *csgf, int len)
+{
+	csgf->cfg = rte_cpu_to_le_32(QDMA_SG_FIN | (len & QDMA_SG_LEN_MASK));
+}
+
 static inline int ilog2(int x)
 {
 	int log = 0;
@@ -43,6 +80,16 @@ static void qdma_writel(u32 val, void *addr)
 	QDMA_OUT(addr, val);
 }
 
+static u32 qdma_readl_be(void *addr)
+{
+	return QDMA_IN_BE(addr);
+}
+
+static void qdma_writel_be(u32 val, void *addr)
+{
+	QDMA_OUT_BE(addr, val);
+}
+
 static void *dma_pool_alloc(int size, int aligned, dma_addr_t *phy_addr)
 {
 	void *virt_addr;
@@ -97,6 +144,31 @@ static void fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
 	fsl_qdma->desc_allocated--;
 }
 
+static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
+				      dma_addr_t dst, dma_addr_t src, u32 len)
+{
+	struct fsl_qdma_format *csgf_src, *csgf_dest;
+
+	/* Note: the command table (fsl_comp->virt_addr) is filled directly
+	 * into the command descriptors of the queues while enqueuing the
+	 * descriptor; please refer to fsl_qdma_enqueue_desc.
+	 * The frame list table (virt_addr + 1) and the source/destination
+	 * descriptor table (fsl_comp->desc_virt_addr and
+	 * fsl_comp->desc_virt_addr + 1) are set up on the control path in
+	 * fsl_qdma_pre_request_enqueue_comp_sd_desc.
+	 */
+	csgf_src = (struct fsl_qdma_format *)fsl_comp->virt_addr + 2;
+	csgf_dest = (struct fsl_qdma_format *)fsl_comp->virt_addr + 3;
+
+	/* Status notification is enqueued to status queue. */
+	qdma_desc_addr_set64(csgf_src, src);
+	qdma_csgf_set_len(csgf_src, len);
+	qdma_desc_addr_set64(csgf_dest, dst);
+	qdma_csgf_set_len(csgf_dest, len);
+	/* This entry is the last entry. */
+	qdma_csgf_set_f(csgf_dest, len);
+}
+
 /*
  * Pre-request command descriptor and compound S/G for enqueue.
  */
@@ -153,6 +225,25 @@ static int fsl_qdma_pre_request_enqueue_comp_sd_desc(
 	return 0;
 }
 
+/*
+ * Request a command descriptor for enqueue.
+ */
+static struct fsl_qdma_comp *
+fsl_qdma_request_enqueue_desc(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *queue = fsl_chan->queue;
+	struct fsl_qdma_comp *comp_temp;
+
+	if (!list_empty(&queue->comp_free)) {
+		comp_temp = list_first_entry(&queue->comp_free,
+					     struct fsl_qdma_comp,
+					     list);
+		list_del(&comp_temp->list);
+		return comp_temp;
+	}
+
+	return NULL;
+}
 
 static struct fsl_qdma_queue
 *fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
@@ -287,6 +378,54 @@ static int fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
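+/* Reap finished jobs from the status queue; returns the number completed. */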
+static int
+fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma,
+				 void *block, int id, const uint16_t nb_cpls,
+				 uint16_t *last_idx,
+				 enum rte_dma_status_code *status)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+	struct fsl_qdma_queue *fsl_status = fsl_qdma->status[id];
+	struct fsl_qdma_queue *temp_queue;
+	struct fsl_qdma_format *status_addr;
+	struct fsl_qdma_comp *fsl_comp = NULL;
+	u32 reg, i;
+	int count = 0;
+
+	while (count < nb_cpls) {
+		reg = qdma_readl_be(block + FSL_QDMA_BSQSR);
+		if (reg & FSL_QDMA_BSQSR_QE_BE)
+			return count;
+
+		status_addr = fsl_status->virt_head;
+
+		i = qdma_ccdf_get_queue(status_addr) +
+			id * fsl_qdma->n_queues;
+		temp_queue = fsl_queue + i;
+		fsl_comp = list_first_entry(&temp_queue->comp_used,
+					    struct fsl_qdma_comp,
+					    list);
+		list_del(&fsl_comp->list);
+
+		reg = qdma_readl_be(block + FSL_QDMA_BSQMR);
+		reg |= FSL_QDMA_BSQMR_DI_BE;
+
+		qdma_desc_addr_set64(status_addr, 0x0);
+		fsl_status->virt_head++;
+		if (fsl_status->virt_head == fsl_status->cq + fsl_status->n_cq)
+			fsl_status->virt_head = fsl_status->cq;
+		qdma_writel_be(reg, block + FSL_QDMA_BSQMR);
+		*last_idx = fsl_comp->index;
+		if (status != NULL)
+			status[count] = RTE_DMA_STATUS_SUCCESSFUL;
+
+		list_add_tail(&fsl_comp->list, &temp_queue->comp_free);
+		count++;
+
+	}
+	return count;
+}
+
 static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 {
 	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
@@ -381,6 +520,65 @@ static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
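+/* Take a free completion descriptor and fill it for a memcpy transfer. */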
+static void *
+fsl_qdma_prep_memcpy(void *fsl_chan, dma_addr_t dst,
+			   dma_addr_t src, size_t len,
+			   void *call_back,
+			   void *param)
+{
+	struct fsl_qdma_comp *fsl_comp;
+
+	fsl_comp =
+	fsl_qdma_request_enqueue_desc((struct fsl_qdma_chan *)fsl_chan);
+	if (!fsl_comp)
+		return NULL;
+
+	fsl_comp->qchan = fsl_chan;
+	fsl_comp->call_back_func = call_back;
+	fsl_comp->params = param;
+
+	fsl_qdma_comp_fill_memcpy(fsl_comp, dst, src, len);
+	return (void *)fsl_comp;
+}
+
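+/* Write one command descriptor into the ring; kick the engine if asked. */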
+static int fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan,
+				  struct fsl_qdma_comp *fsl_comp,
+				  uint64_t flags)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	void *block = fsl_queue->block_base;
+	struct fsl_qdma_format *ccdf;
+	u32 reg;
+
+	/* Retrieve and store the register value in big endian
+	 * to avoid bit swapping.
+	 */
+	reg = qdma_readl_be(block +
+			 FSL_QDMA_BCQSR(fsl_queue->id));
+	if (reg & (FSL_QDMA_BCQSR_QF_XOFF_BE))
+		return -1;
+
+	/* Fill the descriptor command table. */
+	ccdf = (struct fsl_qdma_format *)fsl_queue->virt_head;
+	qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16);
+	qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(fsl_comp->virt_addr));
+	qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(fsl_comp->virt_addr));
+	fsl_comp->index = fsl_queue->virt_head - fsl_queue->cq;
+	fsl_queue->virt_head++;
+
+	if (fsl_queue->virt_head == fsl_queue->cq + fsl_queue->n_cq)
+		fsl_queue->virt_head = fsl_queue->cq;
+
+	list_add_tail(&fsl_comp->list, &fsl_queue->comp_used);
+
+	if (flags == RTE_DMA_OP_FLAG_SUBMIT) {
+		reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
+		reg |= FSL_QDMA_BCQMR_EI_BE;
+		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+	}
+	return 0;
+}
+
 static int fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
 {
 	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
@@ -490,6 +688,150 @@ dpaa_qdma_queue_setup(struct rte_dmadev *dmadev,
 	return dpaa_get_channel(fsl_qdma, vchan);
 }
 
+static int
+dpaa_qdma_submit(struct rte_dmadev *dmadev, uint16_t vchan)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	void *block = fsl_queue->block_base;
+	u32 reg;
+
+	reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
+	reg |= FSL_QDMA_BCQMR_EI_BE;
+	qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+
+	return 0;
+}
+
+static int
+dpaa_qdma_enqueue(struct rte_dmadev *dmadev, uint16_t vchan,
+		  rte_iova_t src, rte_iova_t dst,
+		  uint32_t length, uint64_t flags)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	int ret;
+
+	void *fsl_comp = NULL;
+
+	fsl_comp = fsl_qdma_prep_memcpy(fsl_chan,
+			(dma_addr_t)dst, (dma_addr_t)src,
+			length, NULL, NULL);
+	if (!fsl_comp) {
+		DPAA_QDMA_DP_DEBUG("fsl_comp is NULL\n");
+		return -1;
+	}
+	ret = fsl_qdma_enqueue_desc(fsl_chan, fsl_comp, flags);
+	if (ret)
+		return -1;
+
+	return 0;
+}
+
+static uint16_t
+dpaa_qdma_dequeue_status(struct rte_dmadev *dmadev, uint16_t vchan,
+			 const uint16_t nb_cpls, uint16_t *last_idx,
+			 enum rte_dma_status_code *st)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->dev_private;
+	int id = (int)((fsl_qdma->vchan_map[vchan]) / QDMA_QUEUES);
+	void *block;
+	unsigned int reg;
+	int intr;
+	void *status = fsl_qdma->status_base;
+
+	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
+	if (intr) {
+		DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECBR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+		qdma_writel(0xffffffff,
+			    status + FSL_QDMA_DEDR);
+		intr = qdma_readl(status + FSL_QDMA_DEDR);
+	}
+
+	block = fsl_qdma->block_base +
+		FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id);
+
+	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
+						last_idx, st);
+	if (intr < 0) {
+		void *ctrl = fsl_qdma->ctrl_base;
+
+		reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+		reg |= FSL_QDMA_DMR_DQD;
+		qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+		qdma_writel(0, block + FSL_QDMA_BCQIER(0));
+		DPAA_QDMA_ERR("QDMA: status err!\n");
+	}
+
+	return intr;
+}
+
+
+static uint16_t
+dpaa_qdma_dequeue(struct rte_dmadev *dmadev,
+		  uint16_t vchan, const uint16_t nb_cpls,
+		  uint16_t *last_idx, __rte_unused bool *has_error)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->dev_private;
+	int id = (int)((fsl_qdma->vchan_map[vchan]) / QDMA_QUEUES);
+	void *block;
+	unsigned int reg;
+	int intr;
+	void *status = fsl_qdma->status_base;
+
+	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
+	if (intr) {
+		DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECBR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+		qdma_writel(0xffffffff,
+			    status + FSL_QDMA_DEDR);
+		intr = qdma_readl(status + FSL_QDMA_DEDR);
+	}
+
+	block = fsl_qdma->block_base +
+		FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id);
+
+	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
+						last_idx, NULL);
+	if (intr < 0) {
+		void *ctrl = fsl_qdma->ctrl_base;
+
+		reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+		reg |= FSL_QDMA_DMR_DQD;
+		qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+		qdma_writel(0, block + FSL_QDMA_BCQIER(0));
+		DPAA_QDMA_ERR("QDMA: status err!\n");
+	}
+
+	return intr;
+}
+
 static struct rte_dmadev_ops dpaa_qdma_ops = {
 	.dev_info_get		  = dpaa_info_get,
 	.dev_configure            = dpaa_qdma_configure,
@@ -612,6 +954,10 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
 	dpaa_dev->dmadev = dmadev;
 	dmadev->dev_ops = &dpaa_qdma_ops;
 	dmadev->device = &dpaa_dev->device;
+	dmadev->copy = dpaa_qdma_enqueue;
+	dmadev->submit = dpaa_qdma_submit;
+	dmadev->completed = dpaa_qdma_dequeue;
+	dmadev->completed_status = dpaa_qdma_dequeue_status;
 
 	/* Invoke PMD device initialization function */
 	ret = dpaa_qdma_init(dmadev);
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
index f482b16334..ef3c37e3a8 100644
--- a/drivers/dma/dpaa/dpaa_qdma.h
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -5,6 +5,10 @@
 #ifndef _DPAA_QDMA_H_
 #define _DPAA_QDMA_H_
 
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
 #define CORE_NUMBER 4
 #define RETRIES	5
 
-- 
2.25.1



* [dpdk-dev] [PATCH 6/6] doc: add user guide of DPAA DMA driver
  2021-09-09 11:14 [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver Gagandeep Singh
                   ` (4 preceding siblings ...)
  2021-09-09 11:14 ` [dpdk-dev] [PATCH 5/6] dma/dpaa: support DMA operations Gagandeep Singh
@ 2021-09-09 11:15 ` Gagandeep Singh
  2021-10-27 14:57 ` [dpdk-dev] [PATCH 0/6] Introduce " Thomas Monjalon
  2021-11-01  8:51 ` [dpdk-dev] [PATCH v2 " Gagandeep Singh
  7 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-09-09 11:15 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch adds DPAA DMA user guide.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 MAINTAINERS                 |  1 +
 doc/guides/dmadevs/dpaa.rst | 60 +++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)
 create mode 100644 doc/guides/dmadevs/dpaa.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index e3113b2e7e..0a131ede7c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1349,6 +1349,7 @@ NXP DPAA DMA
 M: Gagandeep Singh <g.singh@nxp.com>
 M: Nipun Gupta <nipun.gupta@nxp.com>
 F: drivers/dma/dpaa/
+F: doc/guides/dmadevs/dpaa.rst
 
 
 Packet processing
diff --git a/doc/guides/dmadevs/dpaa.rst b/doc/guides/dmadevs/dpaa.rst
new file mode 100644
index 0000000000..ed9628ed79
--- /dev/null
+++ b/doc/guides/dmadevs/dpaa.rst
@@ -0,0 +1,60 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright 2021 NXP
+
+NXP DPAA DMA Driver
+=====================
+
+The DPAA DMA driver is an implementation of the dmadev APIs that provides
+a means to initiate a DMA transaction from the CPU. Once initiated, the DMA
+transaction completes without the CPU being involved in the actual data
+transfer. This is achieved by using the QDMA controller of the DPAA SoC.
+
+The QDMA controller transfers blocks of data between one source and one
+destination. The blocks of data transferred can be represented in memory
+as contiguous or noncontiguous using scatter/gather table(s).
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+Features
+--------
+
+The DPAA DMA driver implements the following features in the dmadev API:
+
+- Supports 1 virtual channel.
+- Supports all 4 DMA transfer types: MEM_TO_MEM, MEM_TO_DEV,
+  DEV_TO_MEM, DEV_TO_DEV.
+- Supports DMA silent mode.
+- Supports issuing DMA of data within memory without hogging the CPU
+  while the DMA operation is performed.
+
+Supported DPAA SoCs
+--------------------
+
+- LS1046A
+- LS1043A
+
+Prerequisites
+-------------
+
+See :doc:`../platform/dpaa` for setup information.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+.. note::
+
+   Some parts of the dpaa bus code (qbman and fman library routines) are
+   dual licensed (BSD & GPLv2); however, they are used as BSD in DPDK in
+   userspace.
+
+Initialization
+--------------
+
+On EAL initialization, DPAA DMA devices are detected on the DPAA bus,
+probed, and populated into the device list.
+
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+The DPAA DMA driver for DPDK works only on the NXP SoCs listed under
+``Supported DPAA SoCs``.
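
A usage sketch (not part of this guide): the silent mode listed under
Features is requested through the generic dmadev configure step; this
assumes the 21.11 rte_dma_conf::enable_silent flag and a dev_id already
resolved through the dmadev API:

	struct rte_dma_conf conf = {
		.nb_vchans = 1,
		.enable_silent = true,	/* completions are not reported back */
	};
	int ret = rte_dma_configure(dev_id, &conf);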
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver
  2021-09-09 11:14 [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver Gagandeep Singh
                   ` (5 preceding siblings ...)
  2021-09-09 11:15 ` [dpdk-dev] [PATCH 6/6] doc: add user guide of DPAA DMA driver Gagandeep Singh
@ 2021-10-27 14:57 ` Thomas Monjalon
  2021-10-28  4:34   ` Gagandeep Singh
  2021-11-01  8:51 ` [dpdk-dev] [PATCH v2 " Gagandeep Singh
  7 siblings, 1 reply; 42+ messages in thread
From: Thomas Monjalon @ 2021-10-27 14:57 UTC (permalink / raw)
  To: Gagandeep Singh; +Cc: dev, nipun.gupta

09/09/2021 13:14, Gagandeep Singh:
> This series support DMA driver for NXP
> 1046A and 1043A SoCs.

Any update?
I guess it has to be rebased with latest dmadev.



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver
  2021-10-27 14:57 ` [dpdk-dev] [PATCH 0/6] Introduce " Thomas Monjalon
@ 2021-10-28  4:34   ` Gagandeep Singh
  0 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-10-28  4:34 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, Nipun Gupta



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Wednesday, October 27, 2021 8:28 PM
> To: Gagandeep Singh <G.Singh@nxp.com>
> Cc: dev@dpdk.org; Nipun Gupta <nipun.gupta@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver
> 
> 09/09/2021 13:14, Gagandeep Singh:
> > This series support DMA driver for NXP
> > 1046A and 1043A SoCs.
> 
> Any update?
> I guess it has to be rebased with latest dmadev.
> 
Yes, I will send the next version after rebasing to the latest dmadev.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v2 0/6] Introduce DPAA DMA driver
  2021-09-09 11:14 [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver Gagandeep Singh
                   ` (6 preceding siblings ...)
  2021-10-27 14:57 ` [dpdk-dev] [PATCH 0/6] Introduce " Thomas Monjalon
@ 2021-11-01  8:51 ` Gagandeep Singh
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 1/6] dma/dpaa: introduce " Gagandeep Singh
                     ` (5 more replies)
  7 siblings, 6 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-01  8:51 UTC (permalink / raw)
  To: thomas, dev; +Cc: nipun.gupta, Gagandeep Singh

This series adds a DMA driver for the NXP
LS1046A and LS1043A SoCs.

v2-change-log:
* series rebased on the latest dmadev library

Gagandeep Singh (6):
  dma/dpaa: introduce DPAA DMA driver
  dma/dpaa: add device probe and remove functionality
  dma/dpaa: add driver logs
  dma/dpaa: support basic operations
  dma/dpaa: support DMA operations
  doc: add user guide of DPAA DMA driver

 MAINTAINERS                            |  11 +
 doc/guides/dmadevs/dpaa.rst            |  60 ++
 doc/guides/rel_notes/release_21_11.rst |   3 +
 drivers/bus/dpaa/dpaa_bus.c            |  22 +
 drivers/bus/dpaa/rte_dpaa_bus.h        |   5 +
 drivers/common/dpaax/dpaa_list.h       |   2 +
 drivers/dma/dpaa/dpaa_qdma.c           | 997 +++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h           | 257 +++++++
 drivers/dma/dpaa/dpaa_qdma_logs.h      |  46 ++
 drivers/dma/dpaa/meson.build           |  14 +
 drivers/dma/dpaa/version.map           |   4 +
 drivers/dma/meson.build                |   1 +
 12 files changed, 1422 insertions(+)
 create mode 100644 doc/guides/dmadevs/dpaa.rst
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.c
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.h
 create mode 100644 drivers/dma/dpaa/dpaa_qdma_logs.h
 create mode 100644 drivers/dma/dpaa/meson.build
 create mode 100644 drivers/dma/dpaa/version.map

-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v2 1/6] dma/dpaa: introduce DPAA DMA driver
  2021-11-01  8:51 ` [dpdk-dev] [PATCH v2 " Gagandeep Singh
@ 2021-11-01  8:51   ` Gagandeep Singh
  2021-11-02  8:51     ` fengchengwen
  2021-11-08  9:06     ` [dpdk-dev] [PATCH v3 0/7] Introduce " Gagandeep Singh
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 2/6] dma/dpaa: add device probe and remove functionality Gagandeep Singh
                     ` (4 subsequent siblings)
  5 siblings, 2 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-01  8:51 UTC (permalink / raw)
  To: thomas, dev; +Cc: nipun.gupta, Gagandeep Singh

The DPAA DMA driver is an implementation of the dmadev APIs
that provides a means to initiate a DMA transaction from the CPU.
The initiated DMA is performed without the CPU being involved
in the actual DMA transaction. This is achieved by using
the QDMA controller of the DPAA SoC.
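
A usage sketch (not part of this patch): once the DPAA bus has created and
probed the device, an application can find it through the generic dmadev
API. The helper below assumes the 21.11 lookup function and the
"dpaa_qdma-1" name given by the bus code in this patch:

	#include <rte_dmadev.h>

	static int
	find_dpaa_qdma(void)
	{
		/* The DPAA bus names the device "dpaa_qdma-1"; the lookup
		 * only succeeds after rte_eal_init() has scanned and
		 * probed the bus.
		 */
		return rte_dma_get_dev_id("dpaa_qdma-1");  /* < 0 if absent */
	}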

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 MAINTAINERS                            | 10 +++++++++
 doc/guides/rel_notes/release_21_11.rst |  3 +++
 drivers/bus/dpaa/dpaa_bus.c            | 22 ++++++++++++++++++++
 drivers/bus/dpaa/rte_dpaa_bus.h        |  5 +++++
 drivers/common/dpaax/dpaa_list.h       |  2 ++
 drivers/dma/dpaa/dpaa_qdma.c           | 28 ++++++++++++++++++++++++++
 drivers/dma/dpaa/meson.build           | 14 +++++++++++++
 drivers/dma/dpaa/version.map           |  4 ++++
 drivers/dma/meson.build                |  1 +
 9 files changed, 89 insertions(+)
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.c
 create mode 100644 drivers/dma/dpaa/meson.build
 create mode 100644 drivers/dma/dpaa/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 0e5951f8f1..76b9fb8e6c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1353,6 +1353,16 @@ F: drivers/raw/dpaa2_qdma/
 F: doc/guides/rawdevs/dpaa2_qdma.rst
 
 
+
+Dmadev Drivers
+--------------
+
+NXP DPAA DMA
+M: Gagandeep Singh <g.singh@nxp.com>
+M: Nipun Gupta <nipun.gupta@nxp.com>
+F: drivers/dma/dpaa/
+
+
 Packet processing
 -----------------
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 502cc5ceb2..8080ada721 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -20,6 +20,9 @@ DPDK Release 21.11
       ninja -C build doc
       xdg-open build/doc/guides/html/rel_notes/release_21_11.html
 
+* **Added NXP DPAA DMA driver.**
+
+  * Added a new dmadev driver for the NXP DPAA platform.
 
 New Features
 ------------
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 9a53fdc1fb..737ac8d8c5 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -250,6 +250,28 @@ dpaa_create_device_list(void)
 
 	rte_dpaa_bus.device_count += i;
 
+	/* Creating QDMA Device */
+	for (i = 0; i < RTE_DPAA_QDMA_DEVICES; i++) {
+		dev = calloc(1, sizeof(struct rte_dpaa_device));
+		if (!dev) {
+			DPAA_BUS_LOG(ERR, "Failed to allocate QDMA device");
+			ret = -1;
+			goto cleanup;
+		}
+
+		dev->device_type = FSL_DPAA_QDMA;
+		dev->id.dev_id = rte_dpaa_bus.device_count + i;
+
+		memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
+		sprintf(dev->name, "dpaa_qdma-%d", i+1);
+		DPAA_BUS_LOG(INFO, "%s qdma device added", dev->name);
+		dev->device.name = dev->name;
+		dev->device.devargs = dpaa_devargs_lookup(dev);
+
+		dpaa_add_to_device_list(dev);
+	}
+	rte_dpaa_bus.device_count += i;
+
 	return 0;
 
 cleanup:
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 97d189f9b0..31a5ea3fca 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -58,6 +58,9 @@ dpaa_seqn(struct rte_mbuf *mbuf)
 /** Device driver supports link state interrupt */
 #define RTE_DPAA_DRV_INTR_LSC  0x0008
 
+/** Number of supported QDMA devices */
+#define RTE_DPAA_QDMA_DEVICES  1
+
 #define RTE_DEV_TO_DPAA_CONST(ptr) \
 	container_of(ptr, const struct rte_dpaa_device, device)
 
@@ -73,6 +76,7 @@ TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
 enum rte_dpaa_type {
 	FSL_DPAA_ETH = 1,
 	FSL_DPAA_CRYPTO,
+	FSL_DPAA_QDMA
 };
 
 struct rte_dpaa_bus {
@@ -95,6 +99,7 @@ struct rte_dpaa_device {
 	union {
 		struct rte_eth_dev *eth_dev;
 		struct rte_cryptodev *crypto_dev;
+		struct rte_dma_dev *dmadev;
 	};
 	struct rte_dpaa_driver *driver;
 	struct dpaa_device_id id;
diff --git a/drivers/common/dpaax/dpaa_list.h b/drivers/common/dpaax/dpaa_list.h
index e94575982b..319a3562ab 100644
--- a/drivers/common/dpaax/dpaa_list.h
+++ b/drivers/common/dpaax/dpaa_list.h
@@ -35,6 +35,8 @@ do { \
 	const struct list_head *__p298 = (p); \
 	((__p298->next == __p298) && (__p298->prev == __p298)); \
 })
+#define list_first_entry(ptr, type, member) \
+	list_entry((ptr)->next, type, member)
 #define list_add(p, l) \
 do { \
 	struct list_head *__p298 = (p); \
diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
new file mode 100644
index 0000000000..2ef3ee0c35
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <rte_dpaa_bus.h>
+
+static int
+dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
+		__rte_unused struct rte_dpaa_device *dpaa_dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_remove(__rte_unused struct rte_dpaa_device *dpaa_dev)
+{
+	return 0;
+}
+
+static struct rte_dpaa_driver rte_dpaa_qdma_pmd;
+
+static struct rte_dpaa_driver rte_dpaa_qdma_pmd = {
+	.drv_type = FSL_DPAA_QDMA,
+	.probe = dpaa_qdma_probe,
+	.remove = dpaa_qdma_remove,
+};
+
+RTE_PMD_REGISTER_DPAA(dpaa_qdma, rte_dpaa_qdma_pmd);
diff --git a/drivers/dma/dpaa/meson.build b/drivers/dma/dpaa/meson.build
new file mode 100644
index 0000000000..9ab0862ede
--- /dev/null
+++ b/drivers/dma/dpaa/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+	build = false
+	reason = 'only supported on linux'
+endif
+
+deps += ['dmadev', 'bus_dpaa']
+sources = files('dpaa_qdma.c')
+
+if cc.has_argument('-Wno-pointer-arith')
+	cflags += '-Wno-pointer-arith'
+endif
diff --git a/drivers/dma/dpaa/version.map b/drivers/dma/dpaa/version.map
new file mode 100644
index 0000000000..7bab7bea48
--- /dev/null
+++ b/drivers/dma/dpaa/version.map
@@ -0,0 +1,4 @@
+DPDK_22 {
+
+	local: *;
+};
diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
index a69418ce9b..ab2733f7f6 100644
--- a/drivers/dma/meson.build
+++ b/drivers/dma/meson.build
@@ -5,5 +5,6 @@ drivers = [
         'idxd',
         'ioat',
         'skeleton',
+	'dpaa',
 ]
 std_deps = ['dmadev']
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v2 2/6] dma/dpaa: add device probe and remove functionality
  2021-11-01  8:51 ` [dpdk-dev] [PATCH v2 " Gagandeep Singh
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 1/6] dma/dpaa: introduce " Gagandeep Singh
@ 2021-11-01  8:51   ` Gagandeep Singh
  2021-11-02  9:07     ` fengchengwen
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 3/6] dma/dpaa: add driver logs Gagandeep Singh
                     ` (3 subsequent siblings)
  5 siblings, 1 reply; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-01  8:51 UTC (permalink / raw)
  To: thomas, dev; +Cc: nipun.gupta, Gagandeep Singh

This patch adds device initialisation functionality.
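
For reference, with the constants defined in dpaa_qdma.h (QDMA_CCSR_BASE
0x8380000, QDMA_BLOCK_OFFSET 0x10000, QDMA_BLOCKS 4), the init path maps
regs_size = 0x10000 * (4 + 2) = 0x60000 bytes of CCSR space: the controller
registers at the base, the status-queue block at base + 0x10000, and the
four command-queue blocks starting at base + 0x20000.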

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 434 ++++++++++++++++++++++++++++++++++-
 drivers/dma/dpaa/dpaa_qdma.h | 247 ++++++++++++++++++++
 2 files changed, 679 insertions(+), 2 deletions(-)
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.h

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 2ef3ee0c35..3ad23513e9 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -3,17 +3,447 @@
  */
 
 #include <rte_dpaa_bus.h>
+#include <rte_dmadev_pmd.h>
+
+#include "dpaa_qdma.h"
+
+static inline int ilog2(int x)
+{
+	int log = 0;
+
+	x >>= 1;
+
+	while (x) {
+		log++;
+		x >>= 1;
+	}
+	return log;
+}
+
+static u32 qdma_readl(void *addr)
+{
+	return QDMA_IN(addr);
+}
+
+static void qdma_writel(u32 val, void *addr)
+{
+	QDMA_OUT(addr, val);
+}
+
+static void *dma_pool_alloc(int size, int aligned, dma_addr_t *phy_addr)
+{
+	void *virt_addr;
+
+	virt_addr = rte_malloc("dma pool alloc", size, aligned);
+	if (!virt_addr)
+		return NULL;
+
+	*phy_addr = rte_mem_virt2iova(virt_addr);
+
+	return virt_addr;
+}
+
+static void dma_pool_free(void *addr)
+{
+	rte_free(addr);
+}
+
+static void fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+	struct fsl_qdma_comp *comp_temp, *_comp_temp;
+	int id;
+
+	if (--fsl_queue->count)
+		goto finally;
+
+	id = (fsl_qdma->block_base - fsl_queue->block_base) /
+	      fsl_qdma->block_offset;
+
+	while (rte_atomic32_read(&wait_task[id]) == 1)
+		rte_delay_us(QDMA_DELAY);
+
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &fsl_queue->comp_used,	list) {
+		list_del(&comp_temp->list);
+		dma_pool_free(comp_temp->virt_addr);
+		dma_pool_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &fsl_queue->comp_free, list) {
+		list_del(&comp_temp->list);
+		dma_pool_free(comp_temp->virt_addr);
+		dma_pool_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+finally:
+	fsl_qdma->desc_allocated--;
+}
+static struct fsl_qdma_queue
+*fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *queue_head, *queue_temp;
+	int len, i, j;
+	int queue_num;
+	int blocks;
+	unsigned int queue_size[FSL_QDMA_QUEUE_MAX];
+
+	queue_num = fsl_qdma->n_queues;
+	blocks = fsl_qdma->num_blocks;
+
+	len = sizeof(*queue_head) * queue_num * blocks;
+	queue_head = rte_zmalloc("qdma: queue head", len, 0);
+	if (!queue_head)
+		return NULL;
+
+	for (i = 0; i < FSL_QDMA_QUEUE_MAX; i++)
+		queue_size[i] = QDMA_QUEUE_SIZE;
+
+	for (j = 0; j < blocks; j++) {
+		for (i = 0; i < queue_num; i++) {
+			if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+			    queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+				return NULL;
+			}
+			queue_temp = queue_head + i + (j * queue_num);
+
+			queue_temp->cq =
+			dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+				       queue_size[i],
+				       sizeof(struct fsl_qdma_format) *
+				       queue_size[i], &queue_temp->bus_addr);
+
+			if (!queue_temp->cq)
+				return NULL;
+
+			memset(queue_temp->cq, 0x0, queue_size[i] *
+			       sizeof(struct fsl_qdma_format));
+
+			queue_temp->block_base = fsl_qdma->block_base +
+				FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+			queue_temp->n_cq = queue_size[i];
+			queue_temp->id = i;
+			queue_temp->count = 0;
+			queue_temp->virt_head = queue_temp->cq;
+
+		}
+	}
+	return queue_head;
+}
+
+static struct fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
+{
+	struct fsl_qdma_queue *status_head;
+	unsigned int status_size;
+
+	status_size = QDMA_STATUS_SIZE;
+	if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+	    status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+		return NULL;
+	}
+
+	status_head = rte_zmalloc("qdma: status head", sizeof(*status_head), 0);
+	if (!status_head)
+		return NULL;
+
+	/*
+	 * Buffer for queue command
+	 */
+	status_head->cq = dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+					 status_size,
+					 sizeof(struct fsl_qdma_format) *
+					 status_size,
+					 &status_head->bus_addr);
+
+	if (!status_head->cq)
+		return NULL;
+	memset(status_head->cq, 0x0, status_size *
+	       sizeof(struct fsl_qdma_format));
+
+	status_head->n_cq = status_size;
+	status_head->virt_head = status_head->cq;
+
+	return status_head;
+}
+
+static int fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
+{
+	void *ctrl = fsl_qdma->ctrl_base;
+	void *block;
+	int i, count = RETRIES;
+	unsigned int j;
+	u32 reg;
+
+	/* Disable the command queue and wait for idle state. */
+	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+	reg |= FSL_QDMA_DMR_DQD;
+	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+		for (i = 0; i < FSL_QDMA_QUEUE_NUM_MAX; i++)
+			qdma_writel(0, block + FSL_QDMA_BCQMR(i));
+	}
+	while (true) {
+		reg = qdma_readl(ctrl + FSL_QDMA_DSR);
+		if (!(reg & FSL_QDMA_DSR_DB))
+			break;
+		if (count-- < 0)
+			return -EBUSY;
+		rte_delay_us(100);
+	}
+
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+
+		/* Disable status queue. */
+		qdma_writel(0, block + FSL_QDMA_BSQMR);
+
+		/*
+		 * clear the command queue interrupt detect register for
+		 * all queues.
+		 */
+		qdma_writel(0xffffffff, block + FSL_QDMA_BCQIDR(0));
+	}
+
+	return 0;
+}
+
+static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+	struct fsl_qdma_queue *temp;
+	void *ctrl = fsl_qdma->ctrl_base;
+	void *block;
+	u32 i, j;
+	u32 reg;
+	int ret, val;
+
+	/* Try to halt the qDMA engine first. */
+	ret = fsl_qdma_halt(fsl_qdma);
+	if (ret) {
+		return ret;
+	}
+
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+		for (i = 0; i < fsl_qdma->n_queues; i++) {
+			temp = fsl_queue + i + (j * fsl_qdma->n_queues);
+			/*
+			 * Initialize Command Queue registers to
+			 * point to the first
+			 * command descriptor in memory.
+			 * Dequeue Pointer Address Registers
+			 * Enqueue Pointer Address Registers
+			 */
+
+			qdma_writel(lower_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQDPA_SADDR(i));
+			qdma_writel(upper_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEDPA_SADDR(i));
+			qdma_writel(lower_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEPA_SADDR(i));
+			qdma_writel(upper_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEEPA_SADDR(i));
+
+			/* Initialize the queue mode. */
+			reg = FSL_QDMA_BCQMR_EN;
+			reg |= FSL_QDMA_BCQMR_CD_THLD(ilog2(temp->n_cq) - 4);
+			reg |= FSL_QDMA_BCQMR_CQ_SIZE(ilog2(temp->n_cq) - 6);
+			qdma_writel(reg, block + FSL_QDMA_BCQMR(i));
+		}
+
+		/*
+		 * Workaround for erratum: ERR010812.
+		 * We must enable XOFF to avoid enqueue rejections.
+		 * Setting SQCCMR ENTER_WM to 0x20.
+		 */
+
+		qdma_writel(FSL_QDMA_SQCCMR_ENTER_WM,
+			    block + FSL_QDMA_SQCCMR);
+
+		/*
+		 * Initialize status queue registers to point to the first
+		 * command descriptor in memory.
+		 * Dequeue Pointer Address Registers
+		 * Enqueue Pointer Address Registers
+		 */
+
+		qdma_writel(
+			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEEPAR);
+		qdma_writel(
+			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEPAR);
+		qdma_writel(
+			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEDPAR);
+		qdma_writel(
+			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQDPAR);
+		/* Disable status and command queue interrupts. */
+
+		qdma_writel(0x0, block + FSL_QDMA_BCQIER(0));
+		qdma_writel(0x0, block + FSL_QDMA_BSQICR);
+		qdma_writel(0x0, block + FSL_QDMA_CQIER);
+
+		/* Initialize the status queue mode. */
+		reg = FSL_QDMA_BSQMR_EN;
+		val = ilog2(fsl_qdma->status[j]->n_cq) - 6;
+		reg |= FSL_QDMA_BSQMR_CQ_SIZE(val);
+		qdma_writel(reg, block + FSL_QDMA_BSQMR);
+	}
+
+	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+	reg &= ~FSL_QDMA_DMR_DQD;
+	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+
+	return 0;
+}
+
+static void
+dma_release(void *fsl_chan)
+{
+	((struct fsl_qdma_chan *)fsl_chan)->free = true;
+	fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
+}
+
+static int
+dpaa_qdma_init(struct rte_dma_dev *dmadev)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+	struct fsl_qdma_chan *fsl_chan;
+	uint64_t phys_addr;
+	unsigned int len;
+	int ccsr_qdma_fd;
+	int regs_size;
+	int ret;
+	u32 i;
+
+	fsl_qdma->desc_allocated = 0;
+	fsl_qdma->n_chans = VIRT_CHANNELS;
+	fsl_qdma->n_queues = QDMA_QUEUES;
+	fsl_qdma->num_blocks = QDMA_BLOCKS;
+	fsl_qdma->block_offset = QDMA_BLOCK_OFFSET;
+
+	len = sizeof(*fsl_chan) * fsl_qdma->n_chans;
+	fsl_qdma->chans = rte_zmalloc("qdma: fsl chans", len, 0);
+	if (!fsl_qdma->chans)
+		return -1;
+
+	len = sizeof(struct fsl_qdma_queue *) * fsl_qdma->num_blocks;
+	fsl_qdma->status = rte_zmalloc("qdma: fsl status", len, 0);
+	if (!fsl_qdma->status) {
+		rte_free(fsl_qdma->chans);
+		return -1;
+	}
+
+	for (i = 0; i < fsl_qdma->num_blocks; i++) {
+		rte_atomic32_init(&wait_task[i]);
+		fsl_qdma->status[i] = fsl_qdma_prep_status_queue();
+		if (!fsl_qdma->status[i])
+			goto err;
+	}
+
+	ccsr_qdma_fd = open("/dev/mem", O_RDWR);
+	if (unlikely(ccsr_qdma_fd < 0)) {
+		goto err;
+	}
+
+	regs_size = fsl_qdma->block_offset * (fsl_qdma->num_blocks + 2);
+	phys_addr = QDMA_CCSR_BASE;
+	fsl_qdma->ctrl_base = mmap(NULL, regs_size, PROT_READ |
+					 PROT_WRITE, MAP_SHARED,
+					 ccsr_qdma_fd, phys_addr);
+
+	close(ccsr_qdma_fd);
+	if (fsl_qdma->ctrl_base == MAP_FAILED) {
+		goto err;
+	}
+
+	fsl_qdma->status_base = fsl_qdma->ctrl_base + QDMA_BLOCK_OFFSET;
+	fsl_qdma->block_base = fsl_qdma->status_base + QDMA_BLOCK_OFFSET;
+
+	fsl_qdma->queue = fsl_qdma_alloc_queue_resources(fsl_qdma);
+	if (!fsl_qdma->queue) {
+		munmap(fsl_qdma->ctrl_base, regs_size);
+		goto err;
+	}
+
+	for (i = 0; i < fsl_qdma->n_chans; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		fsl_chan->qdma = fsl_qdma;
+		fsl_chan->queue = fsl_qdma->queue + i % (fsl_qdma->n_queues *
+							fsl_qdma->num_blocks);
+		fsl_chan->free = true;
+	}
+
+	ret = fsl_qdma_reg_init(fsl_qdma);
+	if (ret) {
+		munmap(fsl_qdma->ctrl_base, regs_size);
+		goto err;
+	}
+
+	return 0;
+
+err:
+	rte_free(fsl_qdma->chans);
+	rte_free(fsl_qdma->status);
+
+	return -1;
+}
 
 static int
 dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
-		__rte_unused struct rte_dpaa_device *dpaa_dev)
+		struct rte_dpaa_device *dpaa_dev)
 {
+	struct rte_dma_dev *dmadev;
+	int ret;
+
+	dmadev = rte_dma_pmd_allocate(dpaa_dev->device.name,
+				      rte_socket_id(),
+				      sizeof(struct fsl_qdma_engine));
+	if (!dmadev) {
+		return -EINVAL;
+	}
+
+	dpaa_dev->dmadev = dmadev;
+
+	/* Invoke PMD device initialization function */
+	ret = dpaa_qdma_init(dmadev);
+	if (ret) {
+		(void)rte_dma_pmd_release(dpaa_dev->device.name);
+		return ret;
+	}
+
+	dmadev->state = RTE_DMA_DEV_READY;
 	return 0;
 }
 
 static int
-dpaa_qdma_remove(__rte_unused struct rte_dpaa_device *dpaa_dev)
+dpaa_qdma_remove(struct rte_dpaa_device *dpaa_dev)
 {
+	struct rte_dma_dev *dmadev = dpaa_dev->dmadev;
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+	int i = 0, max = QDMA_QUEUES * QDMA_BLOCKS;
+
+	for (i = 0; i < max; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		if (fsl_chan->free == false)
+			dma_release(fsl_chan);
+	}
+
+	rte_free(fsl_qdma->status);
+	rte_free(fsl_qdma->chans);
+
 	return 0;
 }
 
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
new file mode 100644
index 0000000000..cc0d1f114e
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -0,0 +1,247 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#ifndef _DPAA_QDMA_H_
+#define _DPAA_QDMA_H_
+
+#define CORE_NUMBER 4
+#define RETRIES	5
+
+#define FSL_QDMA_DMR			0x0
+#define FSL_QDMA_DSR			0x4
+#define FSL_QDMA_DEIER			0xe00
+#define FSL_QDMA_DEDR			0xe04
+#define FSL_QDMA_DECFDW0R		0xe10
+#define FSL_QDMA_DECFDW1R		0xe14
+#define FSL_QDMA_DECFDW2R		0xe18
+#define FSL_QDMA_DECFDW3R		0xe1c
+#define FSL_QDMA_DECFQIDR		0xe30
+#define FSL_QDMA_DECBR			0xe34
+
+#define FSL_QDMA_BCQMR(x)		(0xc0 + 0x100 * (x))
+#define FSL_QDMA_BCQSR(x)		(0xc4 + 0x100 * (x))
+#define FSL_QDMA_BCQEDPA_SADDR(x)	(0xc8 + 0x100 * (x))
+#define FSL_QDMA_BCQDPA_SADDR(x)	(0xcc + 0x100 * (x))
+#define FSL_QDMA_BCQEEPA_SADDR(x)	(0xd0 + 0x100 * (x))
+#define FSL_QDMA_BCQEPA_SADDR(x)	(0xd4 + 0x100 * (x))
+#define FSL_QDMA_BCQIER(x)		(0xe0 + 0x100 * (x))
+#define FSL_QDMA_BCQIDR(x)		(0xe4 + 0x100 * (x))
+
+#define FSL_QDMA_SQEDPAR		0x808
+#define FSL_QDMA_SQDPAR			0x80c
+#define FSL_QDMA_SQEEPAR		0x810
+#define FSL_QDMA_SQEPAR			0x814
+#define FSL_QDMA_BSQMR			0x800
+#define FSL_QDMA_BSQSR			0x804
+#define FSL_QDMA_BSQICR			0x828
+#define FSL_QDMA_CQMR			0xa00
+#define FSL_QDMA_CQDSCR1		0xa08
+#define FSL_QDMA_CQDSCR2                0xa0c
+#define FSL_QDMA_CQIER			0xa10
+#define FSL_QDMA_CQEDR			0xa14
+#define FSL_QDMA_SQCCMR			0xa20
+
+#define FSL_QDMA_SQICR_ICEN
+
+#define FSL_QDMA_CQIDR_CQT		0xff000000
+#define FSL_QDMA_CQIDR_SQPE		0x800000
+#define FSL_QDMA_CQIDR_SQT		0x8000
+
+#define FSL_QDMA_BCQIER_CQTIE		0x8000
+#define FSL_QDMA_BCQIER_CQPEIE		0x800000
+#define FSL_QDMA_BSQICR_ICEN		0x80000000
+#define FSL_QDMA_BSQICR_ICST(x)		((x) << 16)
+#define FSL_QDMA_CQIER_MEIE		0x80000000
+#define FSL_QDMA_CQIER_TEIE		0x1
+#define FSL_QDMA_SQCCMR_ENTER_WM	0x200000
+
+#define FSL_QDMA_QUEUE_MAX		8
+
+#define FSL_QDMA_BCQMR_EN		0x80000000
+#define FSL_QDMA_BCQMR_EI		0x40000000
+#define FSL_QDMA_BCQMR_EI_BE           0x40
+#define FSL_QDMA_BCQMR_CD_THLD(x)	((x) << 20)
+#define FSL_QDMA_BCQMR_CQ_SIZE(x)	((x) << 16)
+
+#define FSL_QDMA_BCQSR_QF		0x10000
+#define FSL_QDMA_BCQSR_XOFF		0x1
+#define FSL_QDMA_BCQSR_QF_XOFF_BE      0x1000100
+
+#define FSL_QDMA_BSQMR_EN		0x80000000
+#define FSL_QDMA_BSQMR_DI		0x40000000
+#define FSL_QDMA_BSQMR_DI_BE		0x40
+#define FSL_QDMA_BSQMR_CQ_SIZE(x)	((x) << 16)
+
+#define FSL_QDMA_BSQSR_QE		0x20000
+#define FSL_QDMA_BSQSR_QE_BE		0x200
+#define FSL_QDMA_BSQSR_QF		0x10000
+
+#define FSL_QDMA_DMR_DQD		0x40000000
+#define FSL_QDMA_DSR_DB			0x80000000
+
+#define FSL_QDMA_COMMAND_BUFFER_SIZE	64
+#define FSL_QDMA_DESCRIPTOR_BUFFER_SIZE 32
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MIN	64
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MAX	16384
+#define FSL_QDMA_QUEUE_NUM_MAX		8
+
+#define FSL_QDMA_CMD_RWTTYPE		0x4
+#define FSL_QDMA_CMD_LWC                0x2
+
+#define FSL_QDMA_CMD_RWTTYPE_OFFSET	28
+#define FSL_QDMA_CMD_NS_OFFSET		27
+#define FSL_QDMA_CMD_DQOS_OFFSET	24
+#define FSL_QDMA_CMD_WTHROTL_OFFSET	20
+#define FSL_QDMA_CMD_DSEN_OFFSET	19
+#define FSL_QDMA_CMD_LWC_OFFSET		16
+
+#define QDMA_CCDF_STATUS		20
+#define QDMA_CCDF_OFFSET		20
+#define QDMA_CCDF_MASK			GENMASK(28, 20)
+#define QDMA_CCDF_FORMAT		BIT(29)
+#define QDMA_CCDF_SER			BIT(30)
+
+#define QDMA_SG_FIN			BIT(30)
+#define QDMA_SG_EXT			BIT(31)
+#define QDMA_SG_LEN_MASK		GENMASK(29, 0)
+
+#define QDMA_BIG_ENDIAN			1
+#define COMP_TIMEOUT			100000
+#define COMMAND_QUEUE_OVERFLOW		10
+
+/* qdma engine attribute */
+#define QDMA_QUEUE_SIZE 64
+#define QDMA_STATUS_SIZE 64
+#define QDMA_CCSR_BASE 0x8380000
+#define VIRT_CHANNELS 32
+#define QDMA_BLOCK_OFFSET 0x10000
+#define QDMA_BLOCKS 4
+#define QDMA_QUEUES 8
+#define QDMA_DELAY 1000
+
+#define __arch_getq(a)		(*(volatile u64 *)(a))
+#define __arch_putq(v, a)	(*(volatile u64 *)(a) = (v))
+#define __arch_getq32(a)	(*(volatile u32 *)(a))
+#define __arch_putq32(v, a)	(*(volatile u32 *)(a) = (v))
+#define readq32(c) \
+	({ u32 __v = __arch_getq32(c); rte_io_rmb(); __v; })
+#define writeq32(v, c) \
+	({ u32 __v = v; __arch_putq32(__v, c); __v; })
+#define ioread32(_p)		readq32(_p)
+#define iowrite32(_v, _p)	writeq32(_v, _p)
+
+#define ioread32be(_p)          be32_to_cpu(readq32(_p))
+#define iowrite32be(_v, _p)	writeq32(be32_to_cpu(_v), _p)
+
+#ifdef QDMA_BIG_ENDIAN
+#define QDMA_IN(addr)		ioread32be(addr)
+#define QDMA_OUT(addr, val)	iowrite32be(val, addr)
+#define QDMA_IN_BE(addr)	ioread32(addr)
+#define QDMA_OUT_BE(addr, val)	iowrite32(val, addr)
+#else
+#define QDMA_IN(addr)		ioread32(addr)
+#define QDMA_OUT(addr, val)	iowrite32(val, addr)
+#define QDMA_IN_BE(addr)	ioread32be(addr)
+#define QDMA_OUT_BE(addr, val)	iowrite32be(val, addr)
+#endif
+
+#define FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma_engine, x)			\
+	(((fsl_qdma_engine)->block_offset) * (x))
+
+typedef void (*dma_call_back)(void *params);
+
+/* qDMA Command Descriptor Formats */
+struct fsl_qdma_format {
+	__le32 status; /* ser, status */
+	__le32 cfg;	/* format, offset */
+	union {
+		struct {
+			__le32 addr_lo;	/* low 32-bits of 40-bit address */
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u8 __reserved1[2];
+			u8 cfg8b_w1; /* dd, queue */
+		};
+		__le64 data;
+	};
+};
+
+/* qDMA Source Descriptor Format */
+struct fsl_qdma_sdf {
+	__le32 rev3;
+	__le32 cfg; /* rev4, bit[0-11] - ssd, bit[12-23] sss */
+	__le32 rev5;
+	__le32 cmd;
+};
+
+/* qDMA Destination Descriptor Format */
+struct fsl_qdma_ddf {
+	__le32 rev1;
+	__le32 cfg; /* rev2, bit[0-11] - dsd, bit[12-23] - dss */
+	__le32 rev3;
+	__le32 cmd;
+};
+
+enum dma_status {
+	DMA_COMPLETE,
+	DMA_IN_PROGRESS,
+	DMA_IN_PREPAR,
+	DMA_PAUSED,
+	DMA_ERROR,
+};
+
+struct fsl_qdma_chan {
+	struct fsl_qdma_engine	*qdma;
+	struct fsl_qdma_queue	*queue;
+	bool			free;
+	struct list_head	list;
+};
+
+struct fsl_qdma_list {
+	struct list_head	dma_list;
+};
+
+struct fsl_qdma_queue {
+	struct fsl_qdma_format	*virt_head;
+	struct list_head	comp_used;
+	struct list_head	comp_free;
+	dma_addr_t		bus_addr;
+	u32                     n_cq;
+	u32			id;
+	u32			count;
+	struct fsl_qdma_format	*cq;
+	void			*block_base;
+};
+
+struct fsl_qdma_comp {
+	dma_addr_t              bus_addr;
+	dma_addr_t              desc_bus_addr;
+	void			*virt_addr;
+	int			index;
+	void			*desc_virt_addr;
+	struct fsl_qdma_chan	*qchan;
+	dma_call_back		call_back_func;
+	void			*params;
+	struct list_head	list;
+};
+
+struct fsl_qdma_engine {
+	int			desc_allocated;
+	void			*ctrl_base;
+	void			*status_base;
+	void			*block_base;
+	u32			n_chans;
+	u32			n_queues;
+	int			error_irq;
+	struct fsl_qdma_queue	*queue;
+	struct fsl_qdma_queue	**status;
+	struct fsl_qdma_chan	*chans;
+	u32			num_blocks;
+	u8			free_block_id;
+	u32			vchan_map[4];
+	int			block_offset;
+};
+
+static rte_atomic32_t wait_task[CORE_NUMBER];
+
+#endif /* _DPAA_QDMA_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v2 3/6] dma/dpaa: add driver logs
  2021-11-01  8:51 ` [dpdk-dev] [PATCH v2 " Gagandeep Singh
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 1/6] dma/dpaa: introduce " Gagandeep Singh
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 2/6] dma/dpaa: add device probe and remove functionality Gagandeep Singh
@ 2021-11-01  8:51   ` Gagandeep Singh
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 4/6] dma/dpaa: support basic operations Gagandeep Singh
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-01  8:51 UTC (permalink / raw)
  To: thomas, dev; +Cc: nipun.gupta, Gagandeep Singh

This patch adds logging support to the DPAA DMA driver.
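
A usage sketch (not part of this patch): the logtype is registered with
RTE_LOG_REGISTER_DEFAULT, so its runtime name is the build-generated
default (presumably "pmd.dma.dpaa"), and the level can be raised on the
application command line:

	./dpdk-app --log-level=pmd.dma.dpaa:debug ...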

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c      | 10 +++++++
 drivers/dma/dpaa/dpaa_qdma_logs.h | 46 +++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+)
 create mode 100644 drivers/dma/dpaa/dpaa_qdma_logs.h

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 3ad23513e9..7808b3de7f 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -6,6 +6,7 @@
 #include <rte_dmadev_pmd.h>
 
 #include "dpaa_qdma.h"
+#include "dpaa_qdma_logs.h"
 
 static inline int ilog2(int x)
 {
@@ -107,6 +108,7 @@ static struct fsl_qdma_queue
 		for (i = 0; i < queue_num; i++) {
 			if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
 			    queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+				DPAA_QDMA_ERR("Get wrong queue-sizes.\n");
 				return NULL;
 			}
 			queue_temp = queue_head + i + (j * queue_num);
@@ -143,6 +145,7 @@ static struct fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
 	status_size = QDMA_STATUS_SIZE;
 	if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
 	    status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+		DPAA_QDMA_ERR("Get wrong status_size.\n");
 		return NULL;
 	}
 
@@ -227,6 +230,7 @@ static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 	/* Try to halt the qDMA engine first. */
 	ret = fsl_qdma_halt(fsl_qdma);
 	if (ret) {
+		DPAA_QDMA_ERR("DMA halt failed!");
 		return ret;
 	}
 
@@ -353,6 +357,7 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)
 
 	ccsr_qdma_fd = open("/dev/mem", O_RDWR);
 	if (unlikely(ccsr_qdma_fd < 0)) {
+		DPAA_QDMA_ERR("Can not open /dev/mem for qdma CCSR map");
 		goto err;
 	}
 
@@ -364,6 +369,8 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)
 
 	close(ccsr_qdma_fd);
 	if (fsl_qdma->ctrl_base == MAP_FAILED) {
+		DPAA_QDMA_ERR("Can not map CCSR base qdma: Phys: %08" PRIx64
+		       "size %d\n", phys_addr, regs_size);
 		goto err;
 	}
 
@@ -387,6 +394,7 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)
 
 	ret = fsl_qdma_reg_init(fsl_qdma);
 	if (ret) {
+		DPAA_QDMA_ERR("Can't Initialize the qDMA engine.\n");
 		munmap(fsl_qdma->ctrl_base, regs_size);
 		goto err;
 	}
@@ -411,6 +419,7 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
 				      rte_socket_id(),
 				      sizeof(struct fsl_qdma_engine));
 	if (!dmadev) {
+		DPAA_QDMA_ERR("Unable to allocate dmadevice");
 		return -EINVAL;
 	}
 
@@ -456,3 +465,4 @@ static struct rte_dpaa_driver rte_dpaa_qdma_pmd = {
 };
 
 RTE_PMD_REGISTER_DPAA(dpaa_qdma, rte_dpaa_qdma_pmd);
+RTE_LOG_REGISTER_DEFAULT(dpaa_qdma_logtype, INFO);
diff --git a/drivers/dma/dpaa/dpaa_qdma_logs.h b/drivers/dma/dpaa/dpaa_qdma_logs.h
new file mode 100644
index 0000000000..01d4a508fc
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma_logs.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#ifndef __DPAA_QDMA_LOGS_H__
+#define __DPAA_QDMA_LOGS_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+extern int dpaa_qdma_logtype;
+
+#define DPAA_QDMA_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_qdma_logtype, "dpaa_qdma: " \
+		fmt "\n", ## args)
+
+#define DPAA_QDMA_DEBUG(fmt, args...) \
+	rte_log(RTE_LOG_DEBUG, dpaa_qdma_logtype, "dpaa_qdma: %s(): " \
+		fmt "\n", __func__, ## args)
+
+#define DPAA_QDMA_FUNC_TRACE() DPAA_QDMA_DEBUG(">>")
+
+#define DPAA_QDMA_INFO(fmt, args...) \
+	DPAA_QDMA_LOG(INFO, fmt, ## args)
+#define DPAA_QDMA_ERR(fmt, args...) \
+	DPAA_QDMA_LOG(ERR, fmt, ## args)
+#define DPAA_QDMA_WARN(fmt, args...) \
+	DPAA_QDMA_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define DPAA_QDMA_DP_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "dpaa_qdma: " fmt "\n", ## args)
+
+#define DPAA_QDMA_DP_DEBUG(fmt, args...) \
+	DPAA_QDMA_DP_LOG(DEBUG, fmt, ## args)
+#define DPAA_QDMA_DP_INFO(fmt, args...) \
+	DPAA_QDMA_DP_LOG(INFO, fmt, ## args)
+#define DPAA_QDMA_DP_WARN(fmt, args...) \
+	DPAA_QDMA_DP_LOG(WARNING, fmt, ## args)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __DPAA_QDMA_LOGS_H__ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v2 4/6] dma/dpaa: support basic operations
  2021-11-01  8:51 ` [dpdk-dev] [PATCH v2 " Gagandeep Singh
                     ` (2 preceding siblings ...)
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 3/6] dma/dpaa: add driver logs Gagandeep Singh
@ 2021-11-01  8:51   ` Gagandeep Singh
  2021-11-02  9:21     ` fengchengwen
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 5/6] dma/dpaa: support DMA operations Gagandeep Singh
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 6/6] doc: add user guide of DPAA DMA driver Gagandeep Singh
  5 siblings, 1 reply; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-01  8:51 UTC (permalink / raw)
  To: thomas, dev; +Cc: nipun.gupta, Gagandeep Singh

This patch supports the basic DMA operations, including the
device capability query and channel setup.
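
A usage sketch (not part of this patch; it assumes the generic 21.11
dmadev API and a dev_id resolved earlier) of how the ops added here are
reached:

	#include <rte_dmadev.h>

	static int
	setup_one_vchan(int16_t dev_id)
	{
		struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
		struct rte_dma_vchan_conf qconf = {
			.direction = RTE_DMA_DIR_MEM_TO_MEM,
			.nb_desc = 64,	/* DPAADMA_MIN/MAX_DESC are both 64 */
		};

		if (rte_dma_configure(dev_id, &dev_conf))	/* dpaa_qdma_configure() */
			return -1;
		if (rte_dma_vchan_setup(dev_id, 0, &qconf))	/* dpaa_qdma_queue_setup() */
			return -1;
		return rte_dma_start(dev_id);			/* dpaa_qdma_start() */
	}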

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 185 +++++++++++++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h |   6 ++
 2 files changed, 191 insertions(+)

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 7808b3de7f..0240f40907 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -8,6 +8,18 @@
 #include "dpaa_qdma.h"
 #include "dpaa_qdma_logs.h"
 
+static inline void
+qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
+{
+	ccdf->addr_hi = upper_32_bits(addr);
+	ccdf->addr_lo = rte_cpu_to_le_32(lower_32_bits(addr));
+}
+
+static inline void qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
+{
+	csgf->cfg = rte_cpu_to_le_32(len & QDMA_SG_LEN_MASK);
+}
+
 static inline int ilog2(int x)
 {
 	int log = 0;
@@ -84,6 +96,64 @@ static void fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
 finally:
 	fsl_qdma->desc_allocated--;
 }
+
+/*
+ * Pre-request command descriptor and compound S/G for enqueue.
+ */
+static int fsl_qdma_pre_request_enqueue_comp_sd_desc(
+					struct fsl_qdma_queue *queue,
+					int size, int aligned)
+{
+	struct fsl_qdma_comp *comp_temp;
+	struct fsl_qdma_sdf *sdf;
+	struct fsl_qdma_ddf *ddf;
+	struct fsl_qdma_format *csgf_desc;
+	int i;
+
+	for (i = 0; i < (int)(queue->n_cq + COMMAND_QUEUE_OVERFLOW); i++) {
+		comp_temp = rte_zmalloc("qdma: comp temp",
+					sizeof(*comp_temp), 0);
+		if (!comp_temp)
+			return -ENOMEM;
+
+		comp_temp->virt_addr =
+		dma_pool_alloc(size, aligned, &comp_temp->bus_addr);
+		if (!comp_temp->virt_addr) {
+			rte_free(comp_temp);
+			return -ENOMEM;
+		}
+
+		comp_temp->desc_virt_addr =
+		dma_pool_alloc(size, aligned, &comp_temp->desc_bus_addr);
+		if (!comp_temp->desc_virt_addr) {
+			dma_pool_free(comp_temp->virt_addr);
+			rte_free(comp_temp);
+			return -ENOMEM;
+		}
+
+		memset(comp_temp->virt_addr, 0, FSL_QDMA_COMMAND_BUFFER_SIZE);
+		memset(comp_temp->desc_virt_addr, 0,
+		       FSL_QDMA_DESCRIPTOR_BUFFER_SIZE);
+
+		csgf_desc = (struct fsl_qdma_format *)comp_temp->virt_addr + 1;
+		sdf = (struct fsl_qdma_sdf *)comp_temp->desc_virt_addr;
+		ddf = (struct fsl_qdma_ddf *)comp_temp->desc_virt_addr + 1;
+		/* Compound Command Descriptor(Frame List Table) */
+		qdma_desc_addr_set64(csgf_desc, comp_temp->desc_bus_addr);
+		/* It must be 32 as Compound S/G Descriptor */
+		qdma_csgf_set_len(csgf_desc, 32);
+		/* Descriptor Buffer */
+		sdf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
+			       FSL_QDMA_CMD_RWTTYPE_OFFSET);
+		ddf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
+			       FSL_QDMA_CMD_RWTTYPE_OFFSET);
+		ddf->cmd |= rte_cpu_to_le_32(FSL_QDMA_CMD_LWC <<
+				FSL_QDMA_CMD_LWC_OFFSET);
+
+		list_add_tail(&comp_temp->list, &queue->comp_free);
+	}
+
+	return 0;
+}
+
+
 static struct fsl_qdma_queue
 *fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
 {
@@ -311,6 +381,79 @@ static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
+static int fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+	int ret;
+
+	if (fsl_queue->count++)
+		goto finally;
+
+	INIT_LIST_HEAD(&fsl_queue->comp_free);
+	INIT_LIST_HEAD(&fsl_queue->comp_used);
+
+	ret = fsl_qdma_pre_request_enqueue_comp_sd_desc(fsl_queue,
+				FSL_QDMA_COMMAND_BUFFER_SIZE, 64);
+	if (ret) {
+		DPAA_QDMA_ERR(
+			"failed to alloc dma buffer for comp descriptor\n");
+		goto exit;
+	}
+
+finally:
+	return fsl_qdma->desc_allocated++;
+
+exit:
+	return -ENOMEM;
+}
+
+static int
+dpaa_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info,
+	      uint32_t info_sz)
+{
+#define DPAADMA_MAX_DESC        64
+#define DPAADMA_MIN_DESC        64
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(info_sz);
+
+	dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM |
+			     RTE_DMA_CAPA_MEM_TO_DEV |
+			     RTE_DMA_CAPA_DEV_TO_DEV |
+			     RTE_DMA_CAPA_DEV_TO_MEM |
+			     RTE_DMA_CAPA_SILENT |
+			     RTE_DMA_CAPA_OPS_COPY;
+	dev_info->max_vchans = 1;
+	dev_info->max_desc = DPAADMA_MAX_DESC;
+	dev_info->min_desc = DPAADMA_MIN_DESC;
+
+	return 0;
+}
+
+static int
+dpaa_get_channel(struct fsl_qdma_engine *fsl_qdma, uint16_t vchan)
+{
+	u32 i, start, end;
+
+	start = fsl_qdma->free_block_id * QDMA_QUEUES;
+	fsl_qdma->free_block_id++;
+
+	end = start + 1;
+	for (i = start; i < end; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		if (fsl_chan->free) {
+			fsl_chan->free = false;
+			fsl_qdma_alloc_chan_resources(fsl_chan);
+			fsl_qdma->vchan_map[vchan] = i;
+			return 0;
+		}
+	}
+
+	return -1;
+}
+
 static void
 dma_release(void *fsl_chan)
 {
@@ -318,6 +461,45 @@ dma_release(void *fsl_chan)
 	fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
 }
 
+static int
+dpaa_qdma_configure(__rte_unused struct rte_dma_dev *dmadev,
+		    __rte_unused const struct rte_dma_conf *dev_conf,
+		    __rte_unused uint32_t conf_sz)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_start(__rte_unused struct rte_dma_dev *dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_close(__rte_unused struct rte_dma_dev *dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_queue_setup(struct rte_dma_dev *dmadev,
+		      uint16_t vchan,
+		      __rte_unused const struct rte_dma_vchan_conf *conf,
+		      __rte_unused uint32_t conf_sz)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+
+	return dpaa_get_channel(fsl_qdma, vchan);
+}
+
+static struct rte_dma_dev_ops dpaa_qdma_ops = {
+	.dev_info_get		  = dpaa_info_get,
+	.dev_configure            = dpaa_qdma_configure,
+	.dev_start                = dpaa_qdma_start,
+	.dev_close                = dpaa_qdma_close,
+	.vchan_setup		  = dpaa_qdma_queue_setup,
+};
+
 static int
 dpaa_qdma_init(struct rte_dma_dev *dmadev)
 {
@@ -424,6 +606,9 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
 	}
 
 	dpaa_dev->dmadev = dmadev;
+	dmadev->dev_ops = &dpaa_qdma_ops;
+	dmadev->device = &dpaa_dev->device;
+	dmadev->fp_obj->dev_private = dmadev->data->dev_private;
 
 	/* Invoke PMD device initialization function */
 	ret = dpaa_qdma_init(dmadev);
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
index cc0d1f114e..f482b16334 100644
--- a/drivers/dma/dpaa/dpaa_qdma.h
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -8,6 +8,12 @@
 #define CORE_NUMBER 4
 #define RETRIES	5
 
+#ifndef GENMASK
+#define BITS_PER_LONG	(__SIZEOF_LONG__ * 8)
+#define GENMASK(h, l) \
+		(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+#endif
+
 #define FSL_QDMA_DMR			0x0
 #define FSL_QDMA_DSR			0x4
 #define FSL_QDMA_DEIER			0xe00
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v2 5/6] dma/dpaa: support DMA operations
  2021-11-01  8:51 ` [dpdk-dev] [PATCH v2 " Gagandeep Singh
                     ` (3 preceding siblings ...)
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 4/6] dma/dpaa: support basic operations Gagandeep Singh
@ 2021-11-01  8:51   ` Gagandeep Singh
  2021-11-02  9:31     ` fengchengwen
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 6/6] doc: add user guide of DPAA DMA driver Gagandeep Singh
  5 siblings, 1 reply; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-01  8:51 UTC (permalink / raw)
  To: thomas, dev; +Cc: nipun.gupta, Gagandeep Singh

This patch supports the copy, submit, completed and
completed-status functionality of the DMA driver.
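
A usage sketch (not part of this patch; the generic 21.11 dmadev API is
assumed) of the datapath these ops implement:

	#include <rte_dmadev.h>

	static int
	copy_one(int16_t dev_id, rte_iova_t src, rte_iova_t dst, uint32_t len)
	{
		uint16_t last_idx = 0;
		bool has_error = false;
		/* Enqueue the copy and ring the doorbell in one call; dropping
		 * the flag and calling rte_dma_submit() later batches several
		 * copies per doorbell instead.
		 */
		int idx = rte_dma_copy(dev_id, 0, src, dst, len,
				       RTE_DMA_OP_FLAG_SUBMIT);

		if (idx < 0)	/* e.g. command queue full (XOFF/QF set) */
			return idx;
		/* Poll until the hardware reports the copy complete. */
		while (rte_dma_completed(dev_id, 0, 1, &last_idx, &has_error) == 0)
			;
		return has_error ? -1 : 0;
	}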

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 344 +++++++++++++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h |   4 +
 2 files changed, 348 insertions(+)

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 0240f40907..a5973c22ae 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -15,11 +15,48 @@ qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
 	ccdf->addr_lo = rte_cpu_to_le_32(lower_32_bits(addr));
 }
 
+static inline u64
+qdma_ccdf_get_queue(const struct fsl_qdma_format *ccdf)
+{
+	return ccdf->cfg8b_w1 & 0xff;
+}
+
+static inline int
+qdma_ccdf_get_offset(const struct fsl_qdma_format *ccdf)
+{
+	return (rte_le_to_cpu_32(ccdf->cfg) & QDMA_CCDF_MASK)
+		>> QDMA_CCDF_OFFSET;
+}
+
+static inline void
+qdma_ccdf_set_format(struct fsl_qdma_format *ccdf, int offset)
+{
+	ccdf->cfg = rte_cpu_to_le_32(QDMA_CCDF_FORMAT | offset);
+}
+
+static inline int
+qdma_ccdf_get_status(const struct fsl_qdma_format *ccdf)
+{
+	return (rte_le_to_cpu_32(ccdf->status) & QDMA_CCDF_MASK)
+		>> QDMA_CCDF_STATUS;
+}
+
+static inline void
+qdma_ccdf_set_ser(struct fsl_qdma_format *ccdf, int status)
+{
+	ccdf->status = rte_cpu_to_le_32(QDMA_CCDF_SER | status);
+}
+
 static inline void qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
 {
 	csgf->cfg = rte_cpu_to_le_32(len & QDMA_SG_LEN_MASK);
 }
 
+static inline void qdma_csgf_set_f(struct fsl_qdma_format *csgf, int len)
+{
+	csgf->cfg = rte_cpu_to_le_32(QDMA_SG_FIN | (len & QDMA_SG_LEN_MASK));
+}
+
 static inline int ilog2(int x)
 {
 	int log = 0;
@@ -43,6 +80,16 @@ static void qdma_writel(u32 val, void *addr)
 	QDMA_OUT(addr, val);
 }
 
+static u32 qdma_readl_be(void *addr)
+{
+	return QDMA_IN_BE(addr);
+}
+
+static void qdma_writel_be(u32 val, void *addr)
+{
+	QDMA_OUT_BE(addr, val);
+}
+
 static void *dma_pool_alloc(int size, int aligned, dma_addr_t *phy_addr)
 {
 	void *virt_addr;
@@ -97,6 +144,31 @@ static void fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
 	fsl_qdma->desc_allocated--;
 }
 
+static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
+				      dma_addr_t dst, dma_addr_t src, u32 len)
+{
+	struct fsl_qdma_format *csgf_src, *csgf_dest;
+
+	/* Note: the command table (fsl_comp->virt_addr) is filled directly
+	 * into the command descriptors of the queues while enqueuing the
+	 * descriptor; please refer to fsl_qdma_enqueue_desc.
+	 * The frame list table (virt_addr + 1) and the source/destination
+	 * descriptor tables (fsl_comp->desc_virt_addr and
+	 * fsl_comp->desc_virt_addr + 1) are filled on the control path in
+	 * fsl_qdma_pre_request_enqueue_comp_sd_desc.
+	 */
+	csgf_src = (struct fsl_qdma_format *)fsl_comp->virt_addr + 2;
+	csgf_dest = (struct fsl_qdma_format *)fsl_comp->virt_addr + 3;
+
+	/* Status notification is enqueued to status queue. */
+	qdma_desc_addr_set64(csgf_src, src);
+	qdma_csgf_set_len(csgf_src, len);
+	qdma_desc_addr_set64(csgf_dest, dst);
+	qdma_csgf_set_len(csgf_dest, len);
+	/* This entry is the last entry. */
+	qdma_csgf_set_f(csgf_dest, len);
+}
+
 /*
  * Pre-request command descriptor and compound S/G for enqueue.
  */
@@ -153,6 +225,25 @@ static int fsl_qdma_pre_request_enqueue_comp_sd_desc(
 	return 0;
 }
 
+/*
+ * Request a command descriptor for enqueue.
+ */
+static struct fsl_qdma_comp *
+fsl_qdma_request_enqueue_desc(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *queue = fsl_chan->queue;
+	struct fsl_qdma_comp *comp_temp;
+
+	if (!list_empty(&queue->comp_free)) {
+		comp_temp = list_first_entry(&queue->comp_free,
+					     struct fsl_qdma_comp,
+					     list);
+		list_del(&comp_temp->list);
+		return comp_temp;
+	}
+
+	return NULL;
+}
 
 static struct fsl_qdma_queue
 *fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
@@ -287,6 +378,54 @@ static int fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
+static int
+fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma,
+				 void *block, int id, const uint16_t nb_cpls,
+				 uint16_t *last_idx,
+				 enum rte_dma_status_code *status)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+	struct fsl_qdma_queue *fsl_status = fsl_qdma->status[id];
+	struct fsl_qdma_queue *temp_queue;
+	struct fsl_qdma_format *status_addr;
+	struct fsl_qdma_comp *fsl_comp = NULL;
+	u32 reg, i;
+	int count = 0;
+
+	while (count < nb_cpls) {
+		reg = qdma_readl_be(block + FSL_QDMA_BSQSR);
+		if (reg & FSL_QDMA_BSQSR_QE_BE)
+			return count;
+
+		status_addr = fsl_status->virt_head;
+
+		i = qdma_ccdf_get_queue(status_addr) +
+			id * fsl_qdma->n_queues;
+		temp_queue = fsl_queue + i;
+		fsl_comp = list_first_entry(&temp_queue->comp_used,
+					    struct fsl_qdma_comp,
+					    list);
+		list_del(&fsl_comp->list);
+
+		reg = qdma_readl_be(block + FSL_QDMA_BSQMR);
+		reg |= FSL_QDMA_BSQMR_DI_BE;
+
+		qdma_desc_addr_set64(status_addr, 0x0);
+		fsl_status->virt_head++;
+		if (fsl_status->virt_head == fsl_status->cq + fsl_status->n_cq)
+			fsl_status->virt_head = fsl_status->cq;
+		qdma_writel_be(reg, block + FSL_QDMA_BSQMR);
+		*last_idx = fsl_comp->index;
+		if (status != NULL)
+			status[count] = RTE_DMA_STATUS_SUCCESSFUL;
+
+		list_add_tail(&fsl_comp->list, &temp_queue->comp_free);
+		count++;
+
+	}
+	return count;
+}
+
 static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 {
 	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
@@ -381,6 +520,65 @@ static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
+static void *
+fsl_qdma_prep_memcpy(void *fsl_chan, dma_addr_t dst,
+			   dma_addr_t src, size_t len,
+			   void *call_back,
+			   void *param)
+{
+	struct fsl_qdma_comp *fsl_comp;
+
+	fsl_comp =
+	fsl_qdma_request_enqueue_desc((struct fsl_qdma_chan *)fsl_chan);
+	if (!fsl_comp)
+		return NULL;
+
+	fsl_comp->qchan = fsl_chan;
+	fsl_comp->call_back_func = call_back;
+	fsl_comp->params = param;
+
+	fsl_qdma_comp_fill_memcpy(fsl_comp, dst, src, len);
+	return (void *)fsl_comp;
+}
+
+static int fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan,
+				  struct fsl_qdma_comp *fsl_comp,
+				  uint64_t flags)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	void *block = fsl_queue->block_base;
+	struct fsl_qdma_format *ccdf;
+	u32 reg;
+
+	/* Retrieve and store the register value in big endian
+	 * to avoid byte swapping.
+	 */
+	reg = qdma_readl_be(block +
+			 FSL_QDMA_BCQSR(fsl_queue->id));
+	if (reg & (FSL_QDMA_BCQSR_QF_XOFF_BE))
+		return -1;
+
+	/* Fill the descriptor command table. */
+	ccdf = (struct fsl_qdma_format *)fsl_queue->virt_head;
+	qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16);
+	qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(fsl_comp->virt_addr));
+	qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(fsl_comp->virt_addr));
+	fsl_comp->index = fsl_queue->virt_head - fsl_queue->cq;
+	fsl_queue->virt_head++;
+
+	if (fsl_queue->virt_head == fsl_queue->cq + fsl_queue->n_cq)
+		fsl_queue->virt_head = fsl_queue->cq;
+
+	list_add_tail(&fsl_comp->list, &fsl_queue->comp_used);
+
+	if (flags == RTE_DMA_OP_FLAG_SUBMIT) {
+		reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
+		reg |= FSL_QDMA_BCQMR_EI_BE;
+		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+	}
+	return fsl_comp->index;
+}
+
 static int fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
 {
 	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
@@ -492,6 +690,148 @@ dpaa_qdma_queue_setup(struct rte_dma_dev *dmadev,
 	return dpaa_get_channel(fsl_qdma, vchan);
 }
 
+static int
+dpaa_qdma_submit(void *dev_private, uint16_t vchan)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	void *block = fsl_queue->block_base;
+	u32 reg;
+
+	reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
+	reg |= FSL_QDMA_BCQMR_EI_BE;
+	qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+
+	return 0;
+}
+
+static int
+dpaa_qdma_enqueue(void *dev_private, uint16_t vchan,
+		  rte_iova_t src, rte_iova_t dst,
+		  uint32_t length, uint64_t flags)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	int ret;
+
+	void *fsl_comp = NULL;
+
+	fsl_comp = fsl_qdma_prep_memcpy(fsl_chan,
+			(dma_addr_t)dst, (dma_addr_t)src,
+			length, NULL, NULL);
+	if (!fsl_comp) {
+		DPAA_QDMA_DP_DEBUG("fsl_comp is NULL\n");
+		return -1;
+	}
+	ret = fsl_qdma_enqueue_desc(fsl_chan, fsl_comp, flags);
+
+	return ret;
+}
+
+static uint16_t
+dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,
+			 const uint16_t nb_cpls, uint16_t *last_idx,
+			 enum rte_dma_status_code *st)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	int id = (int)((fsl_qdma->vchan_map[vchan]) / QDMA_QUEUES);
+	void *block;
+	unsigned int reg;
+	int intr;
+	void *status = fsl_qdma->status_base;
+
+	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
+	if (intr) {
+		DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECBR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+		qdma_writel(0xffffffff,
+			    status + FSL_QDMA_DEDR);
+		intr = qdma_readl(status + FSL_QDMA_DEDR);
+	}
+
+	block = fsl_qdma->block_base +
+		FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id);
+
+	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
+						last_idx, st);
+	if (intr < 0) {
+		void *ctrl = fsl_qdma->ctrl_base;
+
+		reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+		reg |= FSL_QDMA_DMR_DQD;
+		qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+		qdma_writel(0, block + FSL_QDMA_BCQIER(0));
+		DPAA_QDMA_ERR("QDMA: status err!\n");
+	}
+
+	return intr;
+}
+
+
+static uint16_t
+dpaa_qdma_dequeue(void *dev_private,
+		  uint16_t vchan, const uint16_t nb_cpls,
+		  uint16_t *last_idx, __rte_unused bool *has_error)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	int id = (int)((fsl_qdma->vchan_map[vchan]) / QDMA_QUEUES);
+	void *block;
+	unsigned int reg;
+	int intr;
+	void *status = fsl_qdma->status_base;
+
+	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
+	if (intr) {
+		DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECBR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+		qdma_writel(0xffffffff,
+			    status + FSL_QDMA_DEDR);
+		intr = qdma_readl(status + FSL_QDMA_DEDR);
+	}
+
+	block = fsl_qdma->block_base +
+		FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id);
+
+	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
+						last_idx, NULL);
+	if (intr < 0) {
+		void *ctrl = fsl_qdma->ctrl_base;
+
+		reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+		reg |= FSL_QDMA_DMR_DQD;
+		qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+		qdma_writel(0, block + FSL_QDMA_BCQIER(0));
+		DPAA_QDMA_ERR("QDMA: status err!\n");
+	}
+
+	return intr;
+}
+
 static struct rte_dma_dev_ops dpaa_qdma_ops = {
 	.dev_info_get		  = dpaa_info_get,
 	.dev_configure            = dpaa_qdma_configure,
@@ -609,6 +949,10 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
 	dmadev->dev_ops = &dpaa_qdma_ops;
 	dmadev->device = &dpaa_dev->device;
 	dmadev->fp_obj->dev_private = dmadev->data->dev_private;
+	dmadev->fp_obj->copy = dpaa_qdma_enqueue;
+	dmadev->fp_obj->submit = dpaa_qdma_submit;
+	dmadev->fp_obj->completed = dpaa_qdma_dequeue;
+	dmadev->fp_obj->completed_status = dpaa_qdma_dequeue_status;
 
 	/* Invoke PMD device initialization function */
 	ret = dpaa_qdma_init(dmadev);
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
index f482b16334..ef3c37e3a8 100644
--- a/drivers/dma/dpaa/dpaa_qdma.h
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -5,6 +5,10 @@
 #ifndef _DPAA_QDMA_H_
 #define _DPAA_QDMA_H_
 
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
 #define CORE_NUMBER 4
 #define RETRIES	5
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v2 6/6] doc: add user guide of DPAA DMA driver
  2021-11-01  8:51 ` [dpdk-dev] [PATCH v2 " Gagandeep Singh
                     ` (4 preceding siblings ...)
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 5/6] dma/dpaa: support DMA operations Gagandeep Singh
@ 2021-11-01  8:51   ` Gagandeep Singh
  5 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-01  8:51 UTC (permalink / raw)
  To: thomas, dev; +Cc: nipun.gupta, Gagandeep Singh

This patch adds the DPAA DMA user guide.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 MAINTAINERS                 |  1 +
 doc/guides/dmadevs/dpaa.rst | 60 +++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)
 create mode 100644 doc/guides/dmadevs/dpaa.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 76b9fb8e6c..a5ad16e309 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1361,6 +1361,7 @@ NXP DPAA DMA
 M: Gagandeep Singh <g.singh@nxp.com>
 M: Nipun Gupta <nipun.gupta@nxp.com>
 F: drivers/dma/dpaa/
+F: doc/guides/dmadevs/dpaa.rst
 
 
 Packet processing
diff --git a/doc/guides/dmadevs/dpaa.rst b/doc/guides/dmadevs/dpaa.rst
new file mode 100644
index 0000000000..ed9628ed79
--- /dev/null
+++ b/doc/guides/dmadevs/dpaa.rst
@@ -0,0 +1,60 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright 2021 NXP
+
+NXP DPAA DMA Driver
+=====================
+
+The DPAA DMA driver is an implementation of the dmadev APIs that provides
+a means to initiate a DMA transaction from the CPU. The initiated DMA is
+performed without the CPU being involved in the actual DMA transaction.
+This is achieved by using the QDMA controller of the DPAA SoC.
+
+The QDMA controller transfers blocks of data between one source and one
+destination. The blocks of data transferred can be represented in memory
+as contiguous or noncontiguous using scatter/gather table(s).
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+Features
+--------
+
+The DPAA DMA driver implements the following features in the dmadev API:
+
+- Supports 1 virtual channel.
+- Supports all four DMA transfer types: MEM_TO_MEM, MEM_TO_DEV,
+  DEV_TO_MEM, DEV_TO_DEV.
+- Supports DMA silent mode.
+- Supports issuing DMA transfers without occupying the CPU while the
+  operation is in progress.
+
+Supported DPAA SoCs
+--------------------
+
+- LS1046A
+- LS1043A
+
+Prerequisites
+-------------
+
+See :doc:`../platform/dpaa` for setup information.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+.. note::
+
+   Some parts of the dpaa bus code (the qbman and fman library routines) are
+   dual licensed (BSD & GPLv2); however, they are used under BSD in DPDK userspace.
+
+Initialization
+--------------
+
+On EAL initialization, DPAA DMA devices are detected on the DPAA bus,
+probed, and added to the bus's device list.
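+
+Below is a minimal sketch of driving a copy through the generic dmadev
+API once the device has been probed. The device id, addresses and
+descriptor count are illustrative assumptions, not values mandated by
+this driver:
+
+.. code-block:: c
+
+   #include <stdbool.h>
+   #include <rte_dmadev.h>
+
+   static int
+   dpaa_qdma_copy_sketch(int16_t dev_id, rte_iova_t src, rte_iova_t dst,
+                         uint32_t len)
+   {
+       struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
+       struct rte_dma_vchan_conf qconf = {
+           .direction = RTE_DMA_DIR_MEM_TO_MEM,
+           .nb_desc = 64, /* assumption; matches this driver's queue depth */
+       };
+       uint16_t last_idx;
+       bool error = false;
+
+       /* One-time setup: configure, create one vchan, start the device. */
+       if (rte_dma_configure(dev_id, &dev_conf) < 0 ||
+           rte_dma_vchan_setup(dev_id, 0, &qconf) < 0 ||
+           rte_dma_start(dev_id) < 0)
+           return -1;
+
+       /* Enqueue one copy and ring the doorbell in the same call. */
+       if (rte_dma_copy(dev_id, 0, src, dst, len,
+                        RTE_DMA_OP_FLAG_SUBMIT) < 0)
+           return -1;
+
+       /* Busy-poll until the transfer is reported complete. */
+       while (rte_dma_completed(dev_id, 0, 1, &last_idx, &error) == 0)
+           ;
+
+       return error ? -1 : 0;
+   }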
+
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+The DPAA DMA driver for DPDK works only on the NXP SoCs listed under
+``Supported DPAA SoCs``.
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/6] dma/dpaa: introduce DPAA DMA driver
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 1/6] dma/dpaa: introduce " Gagandeep Singh
@ 2021-11-02  8:51     ` fengchengwen
  2021-11-02 15:27       ` Thomas Monjalon
  2021-11-08  9:06     ` [dpdk-dev] [PATCH v3 0/7] Introduce " Gagandeep Singh
  1 sibling, 1 reply; 42+ messages in thread
From: fengchengwen @ 2021-11-02  8:51 UTC (permalink / raw)
  To: Gagandeep Singh, thomas, dev; +Cc: nipun.gupta

On 2021/11/1 16:51, Gagandeep Singh wrote:
> The DPAA DMA  driver is an implementation of the dmadev APIs,
> that provide means to initiate a DMA transaction from CPU.
> The initiated DMA is performed without CPU being involved
> in the actual DMA transaction. This is achieved via using
> the QDMA controller of DPAA SoC.
> 
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>

[snip]

> +DPDK_22 {
> +
> +	local: *;
> +};
> diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
> index a69418ce9b..ab2733f7f6 100644
> --- a/drivers/dma/meson.build
> +++ b/drivers/dma/meson.build
> @@ -5,5 +5,6 @@ drivers = [
>          'idxd',
>          'ioat',
>          'skeleton',
> +	'dpaa',

Suggest using spaces instead of a tab, and adding 'dpaa' before 'skeleton'.
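
For illustration (the ordering and indentation here are just one
possibility), the list could then read:

 drivers = [
         'dpaa',
         'idxd',
         'ioat',
         'skeleton',
 ]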

>  ]
>  std_deps = ['dmadev']
> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/6] dma/dpaa: add device probe and remove functionality
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 2/6] dma/dpaa: add device probe and remove functionality Gagandeep Singh
@ 2021-11-02  9:07     ` fengchengwen
  0 siblings, 0 replies; 42+ messages in thread
From: fengchengwen @ 2021-11-02  9:07 UTC (permalink / raw)
  To: Gagandeep Singh, thomas, dev; +Cc: nipun.gupta

On 2021/11/1 16:51, Gagandeep Singh wrote:
> This patch add device initialisation functionality.
> 
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>

[snip]

> +
> +static void fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
> +{
> +	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> +	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
> +	struct fsl_qdma_comp *comp_temp, *_comp_temp;
> +	int id;
> +
> +	if (--fsl_queue->count)
> +		goto finally;
> +
> +	id = (fsl_qdma->block_base - fsl_queue->block_base) /
> +	      fsl_qdma->block_offset;
> +
> +	while (rte_atomic32_read(&wait_task[id]) == 1)
> +		rte_delay_us(QDMA_DELAY);
> +
> +	list_for_each_entry_safe(comp_temp, _comp_temp,
> +				 &fsl_queue->comp_used,	list) {
> +		list_del(&comp_temp->list);
> +		dma_pool_free(comp_temp->virt_addr);
> +		dma_pool_free(comp_temp->desc_virt_addr);
> +		rte_free(comp_temp);
> +	}
> +
> +	list_for_each_entry_safe(comp_temp, _comp_temp,
> +				 &fsl_queue->comp_free, list) {
> +		list_del(&comp_temp->list);
> +		dma_pool_free(comp_temp->virt_addr);
> +		dma_pool_free(comp_temp->desc_virt_addr);
> +		rte_free(comp_temp);
> +	}
> +
> +finally:
> +	fsl_qdma->desc_allocated--;
> +}

add a blank line

> +static struct fsl_qdma_queue
> +*fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
> +{
> +	struct fsl_qdma_queue *queue_head, *queue_temp;
> +	int len, i, j;
> +	int queue_num;
> +	int blocks;
> +	unsigned int queue_size[FSL_QDMA_QUEUE_MAX];
> +
> +	queue_num = fsl_qdma->n_queues;
> +	blocks = fsl_qdma->num_blocks;
> +
> +	len = sizeof(*queue_head) * queue_num * blocks;
> +	queue_head = rte_zmalloc("qdma: queue head", len, 0);
> +	if (!queue_head)
> +		return NULL;
> +
> +	for (i = 0; i < FSL_QDMA_QUEUE_MAX; i++)
> +		queue_size[i] = QDMA_QUEUE_SIZE;
> +
> +	for (j = 0; j < blocks; j++) {
> +		for (i = 0; i < queue_num; i++) {
> +			if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
> +			    queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
> +				return NULL;
> +			}
> +			queue_temp = queue_head + i + (j * queue_num);
> +
> +			queue_temp->cq =
> +			dma_pool_alloc(sizeof(struct fsl_qdma_format) *
> +				       queue_size[i],
> +				       sizeof(struct fsl_qdma_format) *
> +				       queue_size[i], &queue_temp->bus_addr);
> +
> +			memset(queue_temp->cq, 0x0, queue_size[i] *
> +			       sizeof(struct fsl_qdma_format));
> +

Move the memset after the queue_temp->cq validity check.

> +			if (!queue_temp->cq)

queue_head and the previously allocated queue_temp->cq buffers also need to be freed here.

> +				return NULL;
> +
> +			queue_temp->block_base = fsl_qdma->block_base +
> +				FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
> +			queue_temp->n_cq = queue_size[i];
> +			queue_temp->id = i;
> +			queue_temp->count = 0;
> +			queue_temp->virt_head = queue_temp->cq;
> +
> +		}
> +	}
> +	return queue_head;
> +}
> +
> +static struct fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
> +{
> +	struct fsl_qdma_queue *status_head;
> +	unsigned int status_size;
> +
> +	status_size = QDMA_STATUS_SIZE;
> +	if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
> +	    status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
> +		return NULL;
> +	}
> +
> +	status_head = rte_zmalloc("qdma: status head", sizeof(*status_head), 0);
> +	if (!status_head)
> +		return NULL;
> +
> +	/*
> +	 * Buffer for queue command
> +	 */
> +	status_head->cq = dma_pool_alloc(sizeof(struct fsl_qdma_format) *
> +					 status_size,
> +					 sizeof(struct fsl_qdma_format) *
> +					 status_size,
> +					 &status_head->bus_addr);
> +
> +	memset(status_head->cq, 0x0, status_size *
> +	       sizeof(struct fsl_qdma_format));

Move the memset after the status_head->cq validity check.

> +	if (!status_head->cq)

status_head also needs to be freed here.

> +		return NULL;
> +
> +	status_head->n_cq = status_size;
> +	status_head->virt_head = status_head->cq;
> +
> +	return status_head;
> +}
> +
> +static int fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
> +{
> +	void *ctrl = fsl_qdma->ctrl_base;
> +	void *block;
> +	int i, count = RETRIES;
> +	unsigned int j;
> +	u32 reg;
> +
> +	/* Disable the command queue and wait for idle state. */
> +	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
> +	reg |= FSL_QDMA_DMR_DQD;
> +	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
> +	for (j = 0; j < fsl_qdma->num_blocks; j++) {
> +		block = fsl_qdma->block_base +
> +			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
> +		for (i = 0; i < FSL_QDMA_QUEUE_NUM_MAX; i++)
> +			qdma_writel(0, block + FSL_QDMA_BCQMR(i));
> +	}
> +	while (true) {
> +		reg = qdma_readl(ctrl + FSL_QDMA_DSR);
> +		if (!(reg & FSL_QDMA_DSR_DB))
> +			break;
> +		if (count-- < 0)
> +			return -EBUSY;
> +		rte_delay_us(100);
> +	}
> +
> +	for (j = 0; j < fsl_qdma->num_blocks; j++) {
> +		block = fsl_qdma->block_base +
> +			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
> +
> +		/* Disable status queue. */
> +		qdma_writel(0, block + FSL_QDMA_BSQMR);
> +
> +		/*
> +		 * clear the command queue interrupt detect register for
> +		 * all queues.
> +		 */
> +		qdma_writel(0xffffffff, block + FSL_QDMA_BCQIDR(0));
> +	}
> +
> +	return 0;
> +}
> +
> +static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
> +{
> +	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
> +	struct fsl_qdma_queue *temp;
> +	void *ctrl = fsl_qdma->ctrl_base;
> +	void *block;
> +	u32 i, j;
> +	u32 reg;
> +	int ret, val;
> +
> +	/* Try to halt the qDMA engine first. */
> +	ret = fsl_qdma_halt(fsl_qdma);
> +	if (ret) {
> +		return ret;
> +	}
> +
> +	for (j = 0; j < fsl_qdma->num_blocks; j++) {
> +		block = fsl_qdma->block_base +
> +			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
> +		for (i = 0; i < fsl_qdma->n_queues; i++) {
> +			temp = fsl_queue + i + (j * fsl_qdma->n_queues);
> +			/*
> +			 * Initialize Command Queue registers to
> +			 * point to the first
> +			 * command descriptor in memory.
> +			 * Dequeue Pointer Address Registers
> +			 * Enqueue Pointer Address Registers
> +			 */
> +
> +			qdma_writel(lower_32_bits(temp->bus_addr),
> +				    block + FSL_QDMA_BCQDPA_SADDR(i));
> +			qdma_writel(upper_32_bits(temp->bus_addr),
> +				    block + FSL_QDMA_BCQEDPA_SADDR(i));
> +			qdma_writel(lower_32_bits(temp->bus_addr),
> +				    block + FSL_QDMA_BCQEPA_SADDR(i));
> +			qdma_writel(upper_32_bits(temp->bus_addr),
> +				    block + FSL_QDMA_BCQEEPA_SADDR(i));
> +
> +			/* Initialize the queue mode. */
> +			reg = FSL_QDMA_BCQMR_EN;
> +			reg |= FSL_QDMA_BCQMR_CD_THLD(ilog2(temp->n_cq) - 4);
> +			reg |= FSL_QDMA_BCQMR_CQ_SIZE(ilog2(temp->n_cq) - 6);
> +			qdma_writel(reg, block + FSL_QDMA_BCQMR(i));
> +		}
> +
> +		/*
> +		 * Workaround for erratum: ERR010812.
> +		 * We must enable XOFF to avoid the enqueue rejection occurs.
> +		 * Setting SQCCMR ENTER_WM to 0x20.
> +		 */
> +
> +		qdma_writel(FSL_QDMA_SQCCMR_ENTER_WM,
> +			    block + FSL_QDMA_SQCCMR);
> +
> +		/*
> +		 * Initialize status queue registers to point to the first
> +		 * command descriptor in memory.
> +		 * Dequeue Pointer Address Registers
> +		 * Enqueue Pointer Address Registers
> +		 */
> +
> +		qdma_writel(
> +			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
> +			    block + FSL_QDMA_SQEEPAR);
> +		qdma_writel(
> +			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
> +			    block + FSL_QDMA_SQEPAR);
> +		qdma_writel(
> +			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
> +			    block + FSL_QDMA_SQEDPAR);
> +		qdma_writel(
> +			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
> +			    block + FSL_QDMA_SQDPAR);
> +		/* Desiable status queue interrupt. */
> +
> +		qdma_writel(0x0, block + FSL_QDMA_BCQIER(0));
> +		qdma_writel(0x0, block + FSL_QDMA_BSQICR);
> +		qdma_writel(0x0, block + FSL_QDMA_CQIER);
> +
> +		/* Initialize the status queue mode. */
> +		reg = FSL_QDMA_BSQMR_EN;
> +		val = ilog2(fsl_qdma->status[j]->n_cq) - 6;
> +		reg |= FSL_QDMA_BSQMR_CQ_SIZE(val);
> +		qdma_writel(reg, block + FSL_QDMA_BSQMR);
> +	}
> +
> +	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
> +	reg &= ~FSL_QDMA_DMR_DQD;
> +	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
> +
> +	return 0;
> +}
> +
> +static void
> +dma_release(void *fsl_chan)
> +{
> +	((struct fsl_qdma_chan *)fsl_chan)->free = true;
> +	fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
> +}
> +
> +static int
> +dpaa_qdma_init(struct rte_dma_dev *dmadev)
> +{
> +	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
> +	struct fsl_qdma_chan *fsl_chan;
> +	uint64_t phys_addr;
> +	unsigned int len;
> +	int ccsr_qdma_fd;
> +	int regs_size;
> +	int ret;
> +	u32 i;
> +
> +	fsl_qdma->desc_allocated = 0;
> +	fsl_qdma->n_chans = VIRT_CHANNELS;
> +	fsl_qdma->n_queues = QDMA_QUEUES;
> +	fsl_qdma->num_blocks = QDMA_BLOCKS;
> +	fsl_qdma->block_offset = QDMA_BLOCK_OFFSET;
> +
> +	len = sizeof(*fsl_chan) * fsl_qdma->n_chans;
> +	fsl_qdma->chans = rte_zmalloc("qdma: fsl chans", len, 0);
> +	if (!fsl_qdma->chans)
> +		return -1;
> +
> +	len = sizeof(struct fsl_qdma_queue *) * fsl_qdma->num_blocks;
> +	fsl_qdma->status = rte_zmalloc("qdma: fsl status", len, 0);
> +	if (!fsl_qdma->status) {
> +		rte_free(fsl_qdma->chans);
> +		return -1;
> +	}
> +
> +	for (i = 0; i < fsl_qdma->num_blocks; i++) {
> +		rte_atomic32_init(&wait_task[i]);
> +		fsl_qdma->status[i] = fsl_qdma_prep_status_queue();
> +		if (!fsl_qdma->status[i])
> +			goto err;
> +	}
> +
> +	ccsr_qdma_fd = open("/dev/mem", O_RDWR);
> +	if (unlikely(ccsr_qdma_fd < 0)) {
> +		goto err;
> +	}
> +
> +	regs_size = fsl_qdma->block_offset * (fsl_qdma->num_blocks + 2);
> +	phys_addr = QDMA_CCSR_BASE;
> +	fsl_qdma->ctrl_base = mmap(NULL, regs_size, PROT_READ |
> +					 PROT_WRITE, MAP_SHARED,
> +					 ccsr_qdma_fd, phys_addr);
> +
> +	close(ccsr_qdma_fd);
> +	if (fsl_qdma->ctrl_base == MAP_FAILED) {
> +		goto err;
> +	}
> +
> +	fsl_qdma->status_base = fsl_qdma->ctrl_base + QDMA_BLOCK_OFFSET;
> +	fsl_qdma->block_base = fsl_qdma->status_base + QDMA_BLOCK_OFFSET;
> +
> +	fsl_qdma->queue = fsl_qdma_alloc_queue_resources(fsl_qdma);
> +	if (!fsl_qdma->queue) {
> +		munmap(fsl_qdma->ctrl_base, regs_size);
> +		goto err;
> +	}
> +
> +	for (i = 0; i < fsl_qdma->n_chans; i++) {
> +		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
> +
> +		fsl_chan->qdma = fsl_qdma;
> +		fsl_chan->queue = fsl_qdma->queue + i % (fsl_qdma->n_queues *
> +							fsl_qdma->num_blocks);
> +		fsl_chan->free = true;
> +	}
> +
> +	ret = fsl_qdma_reg_init(fsl_qdma);
> +	if (ret) {
> +		munmap(fsl_qdma->ctrl_base, regs_size);
> +		goto err;
> +	}
> +
> +	return 0;
> +
> +err:
> +	rte_free(fsl_qdma->chans);
> +	rte_free(fsl_qdma->status);
> +
> +	return -1;
> +}
>  
>  static int
>  dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
> -		__rte_unused struct rte_dpaa_device *dpaa_dev)
> +		struct rte_dpaa_device *dpaa_dev)
>  {
> +	struct rte_dma_dev *dmadev;
> +	int ret;
> +
> +	dmadev = rte_dma_pmd_allocate(dpaa_dev->device.name,
> +				      rte_socket_id(),
> +				      sizeof(struct fsl_qdma_engine));
> +	if (!dmadev) {
> +		return -EINVAL;
> +	}

No need for braces around a single statement.

> +
> +	dpaa_dev->dmadev = dmadev;
> +
> +	/* Invoke PMD device initialization function */
> +	ret = dpaa_qdma_init(dmadev);
> +	if (ret) {
> +		(void)rte_dma_pmd_release(dpaa_dev->device.name);
> +		return ret;
> +	}
> +
> +	dmadev->state = RTE_DMA_DEV_READY;
>  	return 0;
>  }
>  
>  static int
> -dpaa_qdma_remove(__rte_unused struct rte_dpaa_device *dpaa_dev)
> +dpaa_qdma_remove(struct rte_dpaa_device *dpaa_dev)
>  {
> +	struct rte_dma_dev *dmadev = dpaa_dev->dmadev;
> +	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
> +	int i = 0, max = QDMA_QUEUES * QDMA_BLOCKS;
> +
> +	for (i = 0; i < max; i++) {
> +		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
> +
> +		if (fsl_chan->free == false)
> +			dma_release(fsl_chan);

Where is the rte_dma_dev released?

> +	}
> +
> +	rte_free(fsl_qdma->status);
> +	rte_free(fsl_qdma->chans);
> +
>  	return 0;
>  }
>  

[snip]

> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v2 4/6] dma/dpaa: support basic operations
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 4/6] dma/dpaa: support basic operations Gagandeep Singh
@ 2021-11-02  9:21     ` fengchengwen
  0 siblings, 0 replies; 42+ messages in thread
From: fengchengwen @ 2021-11-02  9:21 UTC (permalink / raw)
  To: Gagandeep Singh, thomas, dev; +Cc: nipun.gupta



On 2021/11/1 16:51, Gagandeep Singh wrote:
> This patch support basic DMA operations which includes
> device capability and channel setup.
> 
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> ---
>  drivers/dma/dpaa/dpaa_qdma.c | 185 +++++++++++++++++++++++++++++++++++
>  drivers/dma/dpaa/dpaa_qdma.h |   6 ++
>  2 files changed, 191 insertions(+)
> 
> diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
> index 7808b3de7f..0240f40907 100644
> --- a/drivers/dma/dpaa/dpaa_qdma.c
> +++ b/drivers/dma/dpaa/dpaa_qdma.c
> @@ -8,6 +8,18 @@
>  #include "dpaa_qdma.h"
>  #include "dpaa_qdma_logs.h"
>  
> +static inline void
> +qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
> +{
> +	ccdf->addr_hi = upper_32_bits(addr);
> +	ccdf->addr_lo = rte_cpu_to_le_32(lower_32_bits(addr));
> +}
> +
> +static inline void qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)

The 'static inline void' should stay on a separate line.

> +{
> +	csgf->cfg = rte_cpu_to_le_32(len & QDMA_SG_LEN_MASK);
> +}
> +
>  static inline int ilog2(int x)
>  {
>  	int log = 0;
> @@ -84,6 +96,64 @@ static void fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
>  finally:
>  	fsl_qdma->desc_allocated--;
>  }
> +
> +/*
> + * Pre-request command descriptor and compound S/G for enqueue.
> + */
> +static int fsl_qdma_pre_request_enqueue_comp_sd_desc(
> +					struct fsl_qdma_queue *queue,
> +					int size, int aligned)
> +{
> +	struct fsl_qdma_comp *comp_temp;
> +	struct fsl_qdma_sdf *sdf;
> +	struct fsl_qdma_ddf *ddf;
> +	struct fsl_qdma_format *csgf_desc;
> +	int i;
> +
> +	for (i = 0; i < (int)(queue->n_cq + COMMAND_QUEUE_OVERFLLOW); i++) {
> +		comp_temp = rte_zmalloc("qdma: comp temp",
> +					sizeof(*comp_temp), 0);
> +		if (!comp_temp)
> +			return -ENOMEM;
> +
> +		comp_temp->virt_addr =
> +		dma_pool_alloc(size, aligned, &comp_temp->bus_addr);
> +		if (!comp_temp->virt_addr) {
> +			rte_free(comp_temp);
> +			return -ENOMEM;
> +		}
> +
> +		comp_temp->desc_virt_addr =
> +		dma_pool_alloc(size, aligned, &comp_temp->desc_bus_addr);
> +		if (!comp_temp->desc_virt_addr)

Add freeing of comp_temp->virt_addr and comp_temp here,

and also free the previously allocated queue resources on failure.

> +			return -ENOMEM;
> +
> +		memset(comp_temp->virt_addr, 0, FSL_QDMA_COMMAND_BUFFER_SIZE);
> +		memset(comp_temp->desc_virt_addr, 0,
> +		       FSL_QDMA_DESCRIPTOR_BUFFER_SIZE);
> +
> +		csgf_desc = (struct fsl_qdma_format *)comp_temp->virt_addr + 1;
> +		sdf = (struct fsl_qdma_sdf *)comp_temp->desc_virt_addr;
> +		ddf = (struct fsl_qdma_ddf *)comp_temp->desc_virt_addr + 1;
> +		/* Compound Command Descriptor(Frame List Table) */
> +		qdma_desc_addr_set64(csgf_desc, comp_temp->desc_bus_addr);
> +		/* It must be 32 as Compound S/G Descriptor */
> +		qdma_csgf_set_len(csgf_desc, 32);
> +		/* Descriptor Buffer */
> +		sdf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
> +			       FSL_QDMA_CMD_RWTTYPE_OFFSET);
> +		ddf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
> +			       FSL_QDMA_CMD_RWTTYPE_OFFSET);
> +		ddf->cmd |= rte_cpu_to_le_32(FSL_QDMA_CMD_LWC <<
> +				FSL_QDMA_CMD_LWC_OFFSET);
> +
> +		list_add_tail(&comp_temp->list, &queue->comp_free);
> +	}
> +
> +	return 0;
> +}
> +
> +
>  static struct fsl_qdma_queue
>  *fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
>  {
> @@ -311,6 +381,79 @@ static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
>  	return 0;
>  }
>  
> +static int fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
> +{
> +	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> +	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
> +	int ret;
> +
> +	if (fsl_queue->count++)
> +		goto finally;
> +
> +	INIT_LIST_HEAD(&fsl_queue->comp_free);
> +	INIT_LIST_HEAD(&fsl_queue->comp_used);
> +
> +	ret = fsl_qdma_pre_request_enqueue_comp_sd_desc(fsl_queue,
> +				FSL_QDMA_COMMAND_BUFFER_SIZE, 64);
> +	if (ret) {
> +		DPAA_QDMA_ERR(
> +			"failed to alloc dma buffer for comp descriptor\n");
> +		goto exit;
> +	}
> +
> +finally:
> +	return fsl_qdma->desc_allocated++;
> +
> +exit:
> +	return -ENOMEM;
> +}
> +
> +static int
> +dpaa_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info,
> +	      uint32_t info_sz)
> +{
> +#define DPAADMA_MAX_DESC        64
> +#define DPAADMA_MIN_DESC        64
> +
> +	RTE_SET_USED(dev);
> +	RTE_SET_USED(info_sz);
> +
> +	dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM |
> +			     RTE_DMA_CAPA_MEM_TO_DEV |
> +			     RTE_DMA_CAPA_DEV_TO_DEV |
> +			     RTE_DMA_CAPA_DEV_TO_MEM |
> +			     RTE_DMA_CAPA_SILENT |
> +			     RTE_DMA_CAPA_OPS_COPY;
> +	dev_info->max_vchans = 1;
> +	dev_info->max_desc = DPAADMA_MAX_DESC;
> +	dev_info->min_desc = DPAADMA_MIN_DESC;
> +
> +	return 0;
> +}
> +
> +static int
> +dpaa_get_channel(struct fsl_qdma_engine *fsl_qdma,  uint16_t vchan)
> +{
> +	u32 i, start, end;
> +
> +	start = fsl_qdma->free_block_id * QDMA_QUEUES;
> +	fsl_qdma->free_block_id++;
> +
> +	end = start + 1;
> +	for (i = start; i < end; i++) {
> +		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
> +
> +		if (fsl_chan->free) {
> +			fsl_chan->free = false;
> +			fsl_qdma_alloc_chan_resources(fsl_chan);

Why not check the return code of fsl_qdma_alloc_chan_resources?

> +			fsl_qdma->vchan_map[vchan] = i;
> +			return 0;
> +		}
> +	}
> +
> +	return -1;
> +}
> +
>  static void
>  dma_release(void *fsl_chan)
>  {
> @@ -318,6 +461,45 @@ dma_release(void *fsl_chan)
>  	fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
>  }
>  
> +static int
> +dpaa_qdma_configure(__rte_unused struct rte_dma_dev *dmadev,
> +		    __rte_unused const struct rte_dma_conf *dev_conf,
> +		    __rte_unused uint32_t conf_sz)
> +{
> +	return 0;
> +}
> +
> +static int
> +dpaa_qdma_start(__rte_unused struct rte_dma_dev *dev)
> +{
> +	return 0;
> +}
> +
> +static int
> +dpaa_qdma_close(__rte_unused struct rte_dma_dev *dev)
> +{
> +	return 0;
> +}
> +
> +static int
> +dpaa_qdma_queue_setup(struct rte_dma_dev *dmadev,
> +		      uint16_t vchan,
> +		      __rte_unused const struct rte_dma_vchan_conf *conf,
> +		      __rte_unused uint32_t conf_sz)
> +{
> +	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
> +
> +	return dpaa_get_channel(fsl_qdma, vchan);
> +}
> +
> +static struct rte_dma_dev_ops dpaa_qdma_ops = {
> +	.dev_info_get		  = dpaa_info_get,
> +	.dev_configure            = dpaa_qdma_configure,
> +	.dev_start                = dpaa_qdma_start,
> +	.dev_close                = dpaa_qdma_close,
> +	.vchan_setup		  = dpaa_qdma_queue_setup,
> +};
> +
>  static int
>  dpaa_qdma_init(struct rte_dma_dev *dmadev)
>  {
> @@ -424,6 +606,9 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
>  	}
>  
>  	dpaa_dev->dmadev = dmadev;
> +	dmadev->dev_ops = &dpaa_qdma_ops;
> +	dmadev->device = &dpaa_dev->device;
> +	dmadev->fp_obj->dev_private = dmadev->data->dev_private;
>  
>  	/* Invoke PMD device initialization function */
>  	ret = dpaa_qdma_init(dmadev);
> diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
> index cc0d1f114e..f482b16334 100644
> --- a/drivers/dma/dpaa/dpaa_qdma.h
> +++ b/drivers/dma/dpaa/dpaa_qdma.h
> @@ -8,6 +8,12 @@
>  #define CORE_NUMBER 4
>  #define RETRIES	5
>  
> +#ifndef GENMASK
> +#define BITS_PER_LONG	(__SIZEOF_LONG__ * 8)
> +#define GENMASK(h, l) \
> +		(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
> +#endif
> +
>  #define FSL_QDMA_DMR			0x0
>  #define FSL_QDMA_DSR			0x4
>  #define FSL_QDMA_DEIER			0xe00
> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/6] dma/dpaa: support DMA operations
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 5/6] dma/dpaa: support DMA operations Gagandeep Singh
@ 2021-11-02  9:31     ` fengchengwen
  2021-11-08  9:06       ` Gagandeep Singh
  0 siblings, 1 reply; 42+ messages in thread
From: fengchengwen @ 2021-11-02  9:31 UTC (permalink / raw)
  To: Gagandeep Singh, thomas, dev; +Cc: nipun.gupta

On 2021/11/1 16:51, Gagandeep Singh wrote:
> This patch support copy, submit, completed and
> completed status functionality of DMA driver.
> 
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>

...

> +
> +static int fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan,
> +				  struct fsl_qdma_comp *fsl_comp,
> +				  uint64_t flags)
> +{
> +	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> +	void *block = fsl_queue->block_base;
> +	struct fsl_qdma_format *ccdf;
> +	u32 reg;
> +
> +	/* retrieve and store the register value in big endian
> +	 * to avoid bits swap
> +	 */
> +	reg = qdma_readl_be(block +
> +			 FSL_QDMA_BCQSR(fsl_queue->id));
> +	if (reg & (FSL_QDMA_BCQSR_QF_XOFF_BE))
> +		return -1;
> +
> +	/* filling descriptor  command table */
> +	ccdf = (struct fsl_qdma_format *)fsl_queue->virt_head;
> +	qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16);
> +	qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(fsl_comp->virt_addr));
> +	qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(fsl_comp->virt_addr));
> +	fsl_comp->index = fsl_queue->virt_head - fsl_queue->cq;
> +	fsl_queue->virt_head++;
> +
> +	if (fsl_queue->virt_head == fsl_queue->cq + fsl_queue->n_cq)
> +		fsl_queue->virt_head = fsl_queue->cq;
> +
> +	list_add_tail(&fsl_comp->list, &fsl_queue->comp_used);
> +
> +	if (flags == RTE_DMA_OP_FLAG_SUBMIT) {
> +		reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
> +		reg |= FSL_QDMA_BCQMR_EI_BE;
> +		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
> +	}
> +	return fsl_comp->index;

I can't work out the real range of the index. It should be [0, 0xffff] from the
framework's point of view.

> +}
> +
>  static int fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
>  {
>  	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> @@ -492,6 +690,148 @@ dpaa_qdma_queue_setup(struct rte_dma_dev *dmadev,
>  	return dpaa_get_channel(fsl_qdma, vchan);
>  }
>  
> +static int
> +dpaa_qdma_submit(void *dev_private, uint16_t vchan)
> +{
> +	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
> +	struct fsl_qdma_chan *fsl_chan =
> +		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
> +	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> +	void *block = fsl_queue->block_base;
> +	u32 reg;
> +
> +	reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
> +	reg |= FSL_QDMA_BCQMR_EI_BE;
> +	qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
> +
> +	return 0;
> +}
> +

...

>  
> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/6] dma/dpaa: introduce DPAA DMA driver
  2021-11-02  8:51     ` fengchengwen
@ 2021-11-02 15:27       ` Thomas Monjalon
  0 siblings, 0 replies; 42+ messages in thread
From: Thomas Monjalon @ 2021-11-02 15:27 UTC (permalink / raw)
  To: Gagandeep Singh, fengchengwen; +Cc: dev, nipun.gupta

02/11/2021 09:51, fengchengwen:
> On 2021/11/1 16:51, Gagandeep Singh wrote:
> > The DPAA DMA  driver is an implementation of the dmadev APIs,
> > that provide means to initiate a DMA transaction from CPU.
> > The initiated DMA is performed without CPU being involved
> > in the actual DMA transaction. This is achieved via using
> > the QDMA controller of DPAA SoC.
> > 
> > Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> 
> [snip]
> 
> > +DPDK_22 {
> > +
> > +	local: *;
> > +};
> > diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
> > index a69418ce9b..ab2733f7f6 100644
> > --- a/drivers/dma/meson.build
> > +++ b/drivers/dma/meson.build
> > @@ -5,5 +5,6 @@ drivers = [
> >          'idxd',
> >          'ioat',
> >          'skeleton',
> > +	'dpaa',
> 
> Suggest using spaces instead of a tab, and adding 'dpaa' before 'skeleton'.

I think alphabetically sorted would be better?




^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v3 0/7] Introduce DPAA DMA driver
  2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 1/6] dma/dpaa: introduce " Gagandeep Singh
  2021-11-02  8:51     ` fengchengwen
@ 2021-11-08  9:06     ` Gagandeep Singh
  2021-11-08  9:06       ` [dpdk-dev] [PATCH v3 1/7] dma/dpaa: introduce " Gagandeep Singh
                         ` (6 more replies)
  1 sibling, 7 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-08  9:06 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This series adds a DMA driver for the NXP
LS1046A and LS1043A SoCs.

v2-change-log:
* series rebased on the latest dma driver

v3-change-log:
* support statistics.
* replaced local endianness conversion functions with rte_*.
* improved submit API logic.
* addressed all comments given by fengchengwen

Gagandeep Singh (7):
  dma/dpaa: introduce DPAA DMA driver
  dma/dpaa: add device probe and remove functionality
  dma/dpaa: add driver logs
  dma/dpaa: support basic operations
  dma/dpaa: support DMA operations
  dma/dpaa: support statistics
  doc: add user guide of DPAA DMA driver

 MAINTAINERS                            |   11 +
 doc/guides/dmadevs/dpaa.rst            |   60 ++
 doc/guides/rel_notes/release_21_11.rst |    3 +
 drivers/bus/dpaa/dpaa_bus.c            |   22 +
 drivers/bus/dpaa/rte_dpaa_bus.h        |    5 +
 drivers/common/dpaax/dpaa_list.h       |    2 +
 drivers/dma/dpaa/dpaa_qdma.c           | 1081 ++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h           |  247 ++++++
 drivers/dma/dpaa/dpaa_qdma_logs.h      |   46 +
 drivers/dma/dpaa/meson.build           |   14 +
 drivers/dma/dpaa/version.map           |    4 +
 drivers/dma/meson.build                |    3 +-
 12 files changed, 1497 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/dmadevs/dpaa.rst
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.c
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.h
 create mode 100644 drivers/dma/dpaa/dpaa_qdma_logs.h
 create mode 100644 drivers/dma/dpaa/meson.build
 create mode 100644 drivers/dma/dpaa/version.map

-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v3 1/7] dma/dpaa: introduce DPAA DMA driver
  2021-11-08  9:06     ` [dpdk-dev] [PATCH v3 0/7] Introduce " Gagandeep Singh
@ 2021-11-08  9:06       ` Gagandeep Singh
  2021-11-09  4:39         ` [dpdk-dev] [PATCH v4 0/5] Introduce " Gagandeep Singh
  2021-11-08  9:06       ` [dpdk-dev] [PATCH v3 2/7] dma/dpaa: add device probe and remove functionality Gagandeep Singh
                         ` (5 subsequent siblings)
  6 siblings, 1 reply; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-08  9:06 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

The DPAA DMA driver is an implementation of the dmadev APIs
that provides a means to initiate a DMA transaction from the CPU.
The initiated DMA is performed without the CPU being involved
in the actual DMA transaction. This is achieved by using
the QDMA controller of the DPAA SoC.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 MAINTAINERS                            | 10 +++++++++
 doc/guides/rel_notes/release_21_11.rst |  3 +++
 drivers/bus/dpaa/dpaa_bus.c            | 22 ++++++++++++++++++++
 drivers/bus/dpaa/rte_dpaa_bus.h        |  5 +++++
 drivers/common/dpaax/dpaa_list.h       |  2 ++
 drivers/dma/dpaa/dpaa_qdma.c           | 28 ++++++++++++++++++++++++++
 drivers/dma/dpaa/meson.build           | 14 +++++++++++++
 drivers/dma/dpaa/version.map           |  4 ++++
 drivers/dma/meson.build                |  3 ++-
 9 files changed, 90 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.c
 create mode 100644 drivers/dma/dpaa/meson.build
 create mode 100644 drivers/dma/dpaa/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 0e5951f8f1..76b9fb8e6c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1353,6 +1353,16 @@ F: drivers/raw/dpaa2_qdma/
 F: doc/guides/rawdevs/dpaa2_qdma.rst
 
 
+
+Dmadev Drivers
+--------------
+
+NXP DPAA DMA
+M: Gagandeep Singh <g.singh@nxp.com>
+M: Nipun Gupta <nipun.gupta@nxp.com>
+F: drivers/dma/dpaa/
+
+
 Packet processing
 -----------------
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 502cc5ceb2..8080ada721 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -20,6 +20,9 @@ DPDK Release 21.11
       ninja -C build doc
       xdg-open build/doc/guides/html/rel_notes/release_21_11.html
 
+* **Added NXP DPAA DMA driver.**
+
+  * Added a new dmadev driver for NXP DPAA platform.
 
 New Features
 ------------
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 9a53fdc1fb..737ac8d8c5 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -250,6 +250,28 @@ dpaa_create_device_list(void)
 
 	rte_dpaa_bus.device_count += i;
 
+	/* Creating QDMA Device */
+	for (i = 0; i < RTE_DPAA_QDMA_DEVICES; i++) {
+		dev = calloc(1, sizeof(struct rte_dpaa_device));
+		if (!dev) {
+			DPAA_BUS_LOG(ERR, "Failed to allocate QDMA device");
+			ret = -1;
+			goto cleanup;
+		}
+
+		dev->device_type = FSL_DPAA_QDMA;
+		dev->id.dev_id = rte_dpaa_bus.device_count + i;
+
+		memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
+		sprintf(dev->name, "dpaa_qdma-%d", i+1);
+		DPAA_BUS_LOG(INFO, "%s qdma device added", dev->name);
+		dev->device.name = dev->name;
+		dev->device.devargs = dpaa_devargs_lookup(dev);
+
+		dpaa_add_to_device_list(dev);
+	}
+	rte_dpaa_bus.device_count += i;
+
 	return 0;
 
 cleanup:
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 97d189f9b0..31a5ea3fca 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -58,6 +58,9 @@ dpaa_seqn(struct rte_mbuf *mbuf)
 /** Device driver supports link state interrupt */
 #define RTE_DPAA_DRV_INTR_LSC  0x0008
 
+/** Number of supported QDMA devices */
+#define RTE_DPAA_QDMA_DEVICES  1
+
 #define RTE_DEV_TO_DPAA_CONST(ptr) \
 	container_of(ptr, const struct rte_dpaa_device, device)
 
@@ -73,6 +76,7 @@ TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
 enum rte_dpaa_type {
 	FSL_DPAA_ETH = 1,
 	FSL_DPAA_CRYPTO,
+	FSL_DPAA_QDMA
 };
 
 struct rte_dpaa_bus {
@@ -95,6 +99,7 @@ struct rte_dpaa_device {
 	union {
 		struct rte_eth_dev *eth_dev;
 		struct rte_cryptodev *crypto_dev;
+		struct rte_dma_dev *dmadev;
 	};
 	struct rte_dpaa_driver *driver;
 	struct dpaa_device_id id;
diff --git a/drivers/common/dpaax/dpaa_list.h b/drivers/common/dpaax/dpaa_list.h
index e94575982b..319a3562ab 100644
--- a/drivers/common/dpaax/dpaa_list.h
+++ b/drivers/common/dpaax/dpaa_list.h
@@ -35,6 +35,8 @@ do { \
 	const struct list_head *__p298 = (p); \
 	((__p298->next == __p298) && (__p298->prev == __p298)); \
 })
+#define list_first_entry(ptr, type, member) \
+	list_entry((ptr)->next, type, member)
 #define list_add(p, l) \
 do { \
 	struct list_head *__p298 = (p); \
diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
new file mode 100644
index 0000000000..2ef3ee0c35
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <rte_dpaa_bus.h>
+
+static int
+dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
+		__rte_unused struct rte_dpaa_device *dpaa_dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_remove(__rte_unused struct rte_dpaa_device *dpaa_dev)
+{
+	return 0;
+}
+
+static struct rte_dpaa_driver rte_dpaa_qdma_pmd;
+
+static struct rte_dpaa_driver rte_dpaa_qdma_pmd = {
+	.drv_type = FSL_DPAA_QDMA,
+	.probe = dpaa_qdma_probe,
+	.remove = dpaa_qdma_remove,
+};
+
+RTE_PMD_REGISTER_DPAA(dpaa_qdma, rte_dpaa_qdma_pmd);
diff --git a/drivers/dma/dpaa/meson.build b/drivers/dma/dpaa/meson.build
new file mode 100644
index 0000000000..9ab0862ede
--- /dev/null
+++ b/drivers/dma/dpaa/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+	build = false
+	reason = 'only supported on linux'
+endif
+
+deps += ['dmadev', 'bus_dpaa']
+sources = files('dpaa_qdma.c')
+
+if cc.has_argument('-Wno-pointer-arith')
+	cflags += '-Wno-pointer-arith'
+endif
diff --git a/drivers/dma/dpaa/version.map b/drivers/dma/dpaa/version.map
new file mode 100644
index 0000000000..7bab7bea48
--- /dev/null
+++ b/drivers/dma/dpaa/version.map
@@ -0,0 +1,4 @@
+DPDK_22 {
+
+	local: *;
+};
diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
index a69418ce9b..37c5c31445 100644
--- a/drivers/dma/meson.build
+++ b/drivers/dma/meson.build
@@ -2,7 +2,8 @@
 # Copyright 2021 HiSilicon Limited
 
 drivers = [
-        'idxd',
+        'dpaa',
+	'idxd',
         'ioat',
         'skeleton',
 ]
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/6] dma/dpaa: support DMA operations
  2021-11-02  9:31     ` fengchengwen
@ 2021-11-08  9:06       ` Gagandeep Singh
  0 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-08  9:06 UTC (permalink / raw)
  To: fengchengwen, thomas, dev; +Cc: Nipun Gupta



> -----Original Message-----
> From: fengchengwen <fengchengwen@huawei.com>
> Sent: Tuesday, November 2, 2021 3:01 PM
> To: Gagandeep Singh <G.Singh@nxp.com>; thomas@monjalon.net;
> dev@dpdk.org
> Cc: Nipun Gupta <nipun.gupta@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH v2 5/6] dma/dpaa: support DMA operations
> 
> On 2021/11/1 16:51, Gagandeep Singh wrote:
> > This patch support copy, submit, completed and
> > completed status functionality of DMA driver.
> >
> > Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> 
> ...
> 
> > +
> > +static int fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan,
> > +				  struct fsl_qdma_comp *fsl_comp,
> > +				  uint64_t flags)
> > +{
> > +	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> > +	void *block = fsl_queue->block_base;
> > +	struct fsl_qdma_format *ccdf;
> > +	u32 reg;
> > +
> > +	/* retrieve and store the register value in big endian
> > +	 * to avoid bits swap
> > +	 */
> > +	reg = qdma_readl_be(block +
> > +			 FSL_QDMA_BCQSR(fsl_queue->id));
> > +	if (reg & (FSL_QDMA_BCQSR_QF_XOFF_BE))
> > +		return -1;
> > +
> > +	/* filling descriptor  command table */
> > +	ccdf = (struct fsl_qdma_format *)fsl_queue->virt_head;
> > +	qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16);
> > +	qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(fsl_comp-
> >virt_addr));
> > +	qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(fsl_comp->virt_addr));
> > +	fsl_comp->index = fsl_queue->virt_head - fsl_queue->cq;
> > +	fsl_queue->virt_head++;
> > +
> > +	if (fsl_queue->virt_head == fsl_queue->cq + fsl_queue->n_cq)
> > +		fsl_queue->virt_head = fsl_queue->cq;
> > +
> > +	list_add_tail(&fsl_comp->list, &fsl_queue->comp_used);
> > +
> > +	if (flags == RTE_DMA_OP_FLAG_SUBMIT) {
> > +		reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue-
> >id));
> > +		reg |= FSL_QDMA_BCQMR_EI_BE;
> > +		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue-
> >id));
> > +	}
> > +	return fsl_comp->index;
> 
> I can't work out the real range of the index. It should be [0, 0xffff] from
> the framework's point of view.
The index range is [0, 63]. It depends on fsl_queue->n_cq, which is configured as QDMA_QUEUE_SIZE.
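
As a rough sketch of that wrap-around (names as in the patch; n_cq is
configured to QDMA_QUEUE_SIZE, i.e. 64):

	index = virt_head - cq;		/* slot in the command ring */
	virt_head++;
	if (virt_head == cq + n_cq)
		virt_head = cq;		/* wrap, so index stays in [0, 63] */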

> 
> > +}
> > +
> >  static int fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
> >  {
> >  	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> > @@ -492,6 +690,148 @@ dpaa_qdma_queue_setup(struct rte_dma_dev
> *dmadev,
> >  	return dpaa_get_channel(fsl_qdma, vchan);
> >  }
> >
> > +static int
> > +dpaa_qdma_submit(void *dev_private, uint16_t vchan)
> > +{
> > +	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine
> *)dev_private;
> > +	struct fsl_qdma_chan *fsl_chan =
> > +		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
> > +	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> > +	void *block = fsl_queue->block_base;
> > +	u32 reg;
> > +
> > +	reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
> > +	reg |= FSL_QDMA_BCQMR_EI_BE;
> > +	qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
> > +
> > +	return 0;
> > +}
> > +
> 
> ...
> 
> >
> >


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v3 2/7] dma/dpaa: add device probe and remove functionality
  2021-11-08  9:06     ` [dpdk-dev] [PATCH v3 0/7] Introduce " Gagandeep Singh
  2021-11-08  9:06       ` [dpdk-dev] [PATCH v3 1/7] dma/dpaa: introduce " Gagandeep Singh
@ 2021-11-08  9:06       ` Gagandeep Singh
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 3/7] dma/dpaa: add driver logs Gagandeep Singh
                         ` (4 subsequent siblings)
  6 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-08  9:06 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch adds device initialisation functionality.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 456 ++++++++++++++++++++++++++++++++++-
 drivers/dma/dpaa/dpaa_qdma.h | 236 ++++++++++++++++++
 2 files changed, 690 insertions(+), 2 deletions(-)
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.h

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 2ef3ee0c35..f958f78af5 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -3,17 +3,469 @@
  */
 
 #include <rte_dpaa_bus.h>
+#include <rte_dmadev_pmd.h>
+
+#include "dpaa_qdma.h"
+
+static inline int
+ilog2(int x)
+{
+	int log = 0;
+
+	x >>= 1;
+
+	while (x) {
+		log++;
+		x >>= 1;
+	}
+	return log;
+}
+
+static u32
+qdma_readl(void *addr)
+{
+	return QDMA_IN(addr);
+}
+
+static void
+qdma_writel(u32 val, void *addr)
+{
+	QDMA_OUT(addr, val);
+}
+
+static void
+*dma_pool_alloc(int size, int aligned, dma_addr_t *phy_addr)
+{
+	void *virt_addr;
+
+	virt_addr = rte_malloc("dma pool alloc", size, aligned);
+	if (!virt_addr)
+		return NULL;
+
+	*phy_addr = rte_mem_virt2iova(virt_addr);
+
+	return virt_addr;
+}
+
+static void
+dma_pool_free(void *addr)
+{
+	rte_free(addr);
+}
+
+static void
+fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+	struct fsl_qdma_comp *comp_temp, *_comp_temp;
+	int id;
+
+	if (--fsl_queue->count)
+		goto finally;
+
+	id = (fsl_qdma->block_base - fsl_queue->block_base) /
+	      fsl_qdma->block_offset;
+
+	while (rte_atomic32_read(&wait_task[id]) == 1)
+		rte_delay_us(QDMA_DELAY);
+
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &fsl_queue->comp_used,	list) {
+		list_del(&comp_temp->list);
+		dma_pool_free(comp_temp->virt_addr);
+		dma_pool_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &fsl_queue->comp_free, list) {
+		list_del(&comp_temp->list);
+		dma_pool_free(comp_temp->virt_addr);
+		dma_pool_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+finally:
+	fsl_qdma->desc_allocated--;
+}
+
+static struct fsl_qdma_queue
+*fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *queue_head, *queue_temp;
+	int len, i, j;
+	int queue_num;
+	int blocks;
+	unsigned int queue_size[FSL_QDMA_QUEUE_MAX];
+
+	queue_num = fsl_qdma->n_queues;
+	blocks = fsl_qdma->num_blocks;
+
+	len = sizeof(*queue_head) * queue_num * blocks;
+	queue_head = rte_zmalloc("qdma: queue head", len, 0);
+	if (!queue_head)
+		return NULL;
+
+	for (i = 0; i < FSL_QDMA_QUEUE_MAX; i++)
+		queue_size[i] = QDMA_QUEUE_SIZE;
+
+	for (j = 0; j < blocks; j++) {
+		for (i = 0; i < queue_num; i++) {
+			if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+			    queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+				goto fail;
+			}
+			queue_temp = queue_head + i + (j * queue_num);
+
+			queue_temp->cq =
+			dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+				       queue_size[i],
+				       sizeof(struct fsl_qdma_format) *
+				       queue_size[i], &queue_temp->bus_addr);
+
+			if (!queue_temp->cq)
+				goto fail;
+
+			memset(queue_temp->cq, 0x0, queue_size[i] *
+			       sizeof(struct fsl_qdma_format));
+
+			queue_temp->block_base = fsl_qdma->block_base +
+				FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+			queue_temp->n_cq = queue_size[i];
+			queue_temp->id = i;
+			queue_temp->count = 0;
+			queue_temp->pending = 0;
+			queue_temp->virt_head = queue_temp->cq;
+
+		}
+	}
+	return queue_head;
+
+fail:
+	for (j = 0; j < blocks; j++) {
+		for (i = 0; i < queue_num; i++) {
+			queue_temp = queue_head + i + (j * queue_num);
+			dma_pool_free(queue_temp->cq);
+		}
+	}
+	rte_free(queue_head);
+
+	return NULL;
+}
+
+static struct
+fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
+{
+	struct fsl_qdma_queue *status_head;
+	unsigned int status_size;
+
+	status_size = QDMA_STATUS_SIZE;
+	if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+	    status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+		return NULL;
+	}
+
+	status_head = rte_zmalloc("qdma: status head", sizeof(*status_head), 0);
+	if (!status_head)
+		return NULL;
+
+	/*
+	 * Buffer for queue command
+	 */
+	status_head->cq = dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+					 status_size,
+					 sizeof(struct fsl_qdma_format) *
+					 status_size,
+					 &status_head->bus_addr);
+
+	if (!status_head->cq) {
+		rte_free(status_head);
+		return NULL;
+	}
+
+	memset(status_head->cq, 0x0, status_size *
+	       sizeof(struct fsl_qdma_format));
+	status_head->n_cq = status_size;
+	status_head->virt_head = status_head->cq;
+
+	return status_head;
+}
+
+static int
+fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
+{
+	void *ctrl = fsl_qdma->ctrl_base;
+	void *block;
+	int i, count = RETRIES;
+	unsigned int j;
+	u32 reg;
+
+	/* Disable the command queue and wait for idle state. */
+	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+	reg |= FSL_QDMA_DMR_DQD;
+	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+		for (i = 0; i < FSL_QDMA_QUEUE_NUM_MAX; i++)
+			qdma_writel(0, block + FSL_QDMA_BCQMR(i));
+	}
+	while (true) {
+		reg = qdma_readl(ctrl + FSL_QDMA_DSR);
+		if (!(reg & FSL_QDMA_DSR_DB))
+			break;
+		if (count-- < 0)
+			return -EBUSY;
+		rte_delay_us(100);
+	}
+
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+
+		/* Disable status queue. */
+		qdma_writel(0, block + FSL_QDMA_BSQMR);
+
+		/*
+		 * clear the command queue interrupt detect register for
+		 * all queues.
+		 */
+		qdma_writel(0xffffffff, block + FSL_QDMA_BCQIDR(0));
+	}
+
+	return 0;
+}
+
+static int
+fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+	struct fsl_qdma_queue *temp;
+	void *ctrl = fsl_qdma->ctrl_base;
+	void *block;
+	u32 i, j;
+	u32 reg;
+	int ret, val;
+
+	/* Try to halt the qDMA engine first. */
+	ret = fsl_qdma_halt(fsl_qdma);
+	if (ret)
+		return ret;
+
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+		for (i = 0; i < fsl_qdma->n_queues; i++) {
+			temp = fsl_queue + i + (j * fsl_qdma->n_queues);
+			/*
+			 * Initialize Command Queue registers to
+			 * point to the first
+			 * command descriptor in memory.
+			 * Dequeue Pointer Address Registers
+			 * Enqueue Pointer Address Registers
+			 */
+
+			qdma_writel(lower_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQDPA_SADDR(i));
+			qdma_writel(upper_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEDPA_SADDR(i));
+			qdma_writel(lower_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEPA_SADDR(i));
+			qdma_writel(upper_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEEPA_SADDR(i));
+
+			/* Initialize the queue mode. */
+			reg = FSL_QDMA_BCQMR_EN;
+			reg |= FSL_QDMA_BCQMR_CD_THLD(ilog2(temp->n_cq) - 4);
+			reg |= FSL_QDMA_BCQMR_CQ_SIZE(ilog2(temp->n_cq) - 6);
+			qdma_writel(reg, block + FSL_QDMA_BCQMR(i));
+		}
+
+		/*
+		 * Workaround for erratum: ERR010812.
+		 * We must enable XOFF to avoid enqueue rejections.
+		 * Setting SQCCMR ENTER_WM to 0x20.
+		 */
+
+		qdma_writel(FSL_QDMA_SQCCMR_ENTER_WM,
+			    block + FSL_QDMA_SQCCMR);
+
+		/*
+		 * Initialize status queue registers to point to the first
+		 * command descriptor in memory.
+		 * Dequeue Pointer Address Registers
+		 * Enqueue Pointer Address Registers
+		 */
+
+		qdma_writel(
+			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEEPAR);
+		qdma_writel(
+			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEPAR);
+		qdma_writel(
+			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEDPAR);
+		qdma_writel(
+			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQDPAR);
+		/* Disable status queue interrupts. */
+
+		qdma_writel(0x0, block + FSL_QDMA_BCQIER(0));
+		qdma_writel(0x0, block + FSL_QDMA_BSQICR);
+		qdma_writel(0x0, block + FSL_QDMA_CQIER);
+
+		/* Initialize the status queue mode. */
+		reg = FSL_QDMA_BSQMR_EN;
+		val = ilog2(fsl_qdma->status[j]->n_cq) - 6;
+		reg |= FSL_QDMA_BSQMR_CQ_SIZE(val);
+		qdma_writel(reg, block + FSL_QDMA_BSQMR);
+	}
+
+	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+	reg &= ~FSL_QDMA_DMR_DQD;
+	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+
+	return 0;
+}
+
+static void
+dma_release(void *fsl_chan)
+{
+	((struct fsl_qdma_chan *)fsl_chan)->free = true;
+	fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
+}
+
+static int
+dpaa_qdma_init(struct rte_dma_dev *dmadev)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+	struct fsl_qdma_chan *fsl_chan;
+	uint64_t phys_addr;
+	unsigned int len;
+	int ccsr_qdma_fd;
+	int regs_size;
+	int ret;
+	u32 i;
+
+	fsl_qdma->desc_allocated = 0;
+	fsl_qdma->n_chans = VIRT_CHANNELS;
+	fsl_qdma->n_queues = QDMA_QUEUES;
+	fsl_qdma->num_blocks = QDMA_BLOCKS;
+	fsl_qdma->block_offset = QDMA_BLOCK_OFFSET;
+
+	len = sizeof(*fsl_chan) * fsl_qdma->n_chans;
+	fsl_qdma->chans = rte_zmalloc("qdma: fsl chans", len, 0);
+	if (!fsl_qdma->chans)
+		return -1;
+
+	len = sizeof(struct fsl_qdma_queue *) * fsl_qdma->num_blocks;
+	fsl_qdma->status = rte_zmalloc("qdma: fsl status", len, 0);
+	if (!fsl_qdma->status) {
+		rte_free(fsl_qdma->chans);
+		return -1;
+	}
+
+	for (i = 0; i < fsl_qdma->num_blocks; i++) {
+		rte_atomic32_init(&wait_task[i]);
+		fsl_qdma->status[i] = fsl_qdma_prep_status_queue();
+		if (!fsl_qdma->status[i])
+			goto err;
+	}
+
+	ccsr_qdma_fd = open("/dev/mem", O_RDWR);
+	if (unlikely(ccsr_qdma_fd < 0))
+		goto err;
+
+	regs_size = fsl_qdma->block_offset * (fsl_qdma->num_blocks + 2);
+	phys_addr = QDMA_CCSR_BASE;
+	fsl_qdma->ctrl_base = mmap(NULL, regs_size, PROT_READ |
+					 PROT_WRITE, MAP_SHARED,
+					 ccsr_qdma_fd, phys_addr);
+
+	close(ccsr_qdma_fd);
+	if (fsl_qdma->ctrl_base == MAP_FAILED)
+		goto err;
+
+	fsl_qdma->status_base = fsl_qdma->ctrl_base + QDMA_BLOCK_OFFSET;
+	fsl_qdma->block_base = fsl_qdma->status_base + QDMA_BLOCK_OFFSET;
+
+	fsl_qdma->queue = fsl_qdma_alloc_queue_resources(fsl_qdma);
+	if (!fsl_qdma->queue) {
+		munmap(fsl_qdma->ctrl_base, regs_size);
+		goto err;
+	}
+
+	for (i = 0; i < fsl_qdma->n_chans; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		fsl_chan->qdma = fsl_qdma;
+		fsl_chan->queue = fsl_qdma->queue + i % (fsl_qdma->n_queues *
+							fsl_qdma->num_blocks);
+		fsl_chan->free = true;
+	}
+
+	ret = fsl_qdma_reg_init(fsl_qdma);
+	if (ret) {
+		munmap(fsl_qdma->ctrl_base, regs_size);
+		goto err;
+	}
+
+	return 0;
+
+err:
+	rte_free(fsl_qdma->chans);
+	rte_free(fsl_qdma->status);
+
+	return -1;
+}
 
 static int
 dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
-		__rte_unused struct rte_dpaa_device *dpaa_dev)
+		struct rte_dpaa_device *dpaa_dev)
 {
+	struct rte_dma_dev *dmadev;
+	int ret;
+
+	dmadev = rte_dma_pmd_allocate(dpaa_dev->device.name,
+				      rte_socket_id(),
+				      sizeof(struct fsl_qdma_engine));
+	if (!dmadev)
+		return -EINVAL;
+
+	dpaa_dev->dmadev = dmadev;
+
+	/* Invoke PMD device initialization function */
+	ret = dpaa_qdma_init(dmadev);
+	if (ret) {
+		(void)rte_dma_pmd_release(dpaa_dev->device.name);
+		return ret;
+	}
+
+	dmadev->state = RTE_DMA_DEV_READY;
 	return 0;
 }
 
 static int
-dpaa_qdma_remove(__rte_unused struct rte_dpaa_device *dpaa_dev)
+dpaa_qdma_remove(struct rte_dpaa_device *dpaa_dev)
 {
+	struct rte_dma_dev *dmadev = dpaa_dev->dmadev;
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+	int i = 0, max = QDMA_QUEUES * QDMA_BLOCKS;
+
+	for (i = 0; i < max; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		if (fsl_chan->free == false)
+			dma_release(fsl_chan);
+	}
+
+	rte_free(fsl_qdma->status);
+	rte_free(fsl_qdma->chans);
+
+	(void)rte_dma_pmd_release(dpaa_dev->device.name);
+
 	return 0;
 }
 
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
new file mode 100644
index 0000000000..c05620b740
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -0,0 +1,236 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#ifndef _DPAA_QDMA_H_
+#define _DPAA_QDMA_H_
+
+#include <rte_io.h>
+
+#define CORE_NUMBER 4
+#define RETRIES	5
+
+#define FSL_QDMA_DMR			0x0
+#define FSL_QDMA_DSR			0x4
+#define FSL_QDMA_DEIER			0xe00
+#define FSL_QDMA_DEDR			0xe04
+#define FSL_QDMA_DECFDW0R		0xe10
+#define FSL_QDMA_DECFDW1R		0xe14
+#define FSL_QDMA_DECFDW2R		0xe18
+#define FSL_QDMA_DECFDW3R		0xe1c
+#define FSL_QDMA_DECFQIDR		0xe30
+#define FSL_QDMA_DECBR			0xe34
+
+#define FSL_QDMA_BCQMR(x)		(0xc0 + 0x100 * (x))
+#define FSL_QDMA_BCQSR(x)		(0xc4 + 0x100 * (x))
+#define FSL_QDMA_BCQEDPA_SADDR(x)	(0xc8 + 0x100 * (x))
+#define FSL_QDMA_BCQDPA_SADDR(x)	(0xcc + 0x100 * (x))
+#define FSL_QDMA_BCQEEPA_SADDR(x)	(0xd0 + 0x100 * (x))
+#define FSL_QDMA_BCQEPA_SADDR(x)	(0xd4 + 0x100 * (x))
+#define FSL_QDMA_BCQIER(x)		(0xe0 + 0x100 * (x))
+#define FSL_QDMA_BCQIDR(x)		(0xe4 + 0x100 * (x))
+
+#define FSL_QDMA_SQEDPAR		0x808
+#define FSL_QDMA_SQDPAR			0x80c
+#define FSL_QDMA_SQEEPAR		0x810
+#define FSL_QDMA_SQEPAR			0x814
+#define FSL_QDMA_BSQMR			0x800
+#define FSL_QDMA_BSQSR			0x804
+#define FSL_QDMA_BSQICR			0x828
+#define FSL_QDMA_CQMR			0xa00
+#define FSL_QDMA_CQDSCR1		0xa08
+#define FSL_QDMA_CQDSCR2                0xa0c
+#define FSL_QDMA_CQIER			0xa10
+#define FSL_QDMA_CQEDR			0xa14
+#define FSL_QDMA_SQCCMR			0xa20
+
+#define FSL_QDMA_SQICR_ICEN
+
+#define FSL_QDMA_CQIDR_CQT		0xff000000
+#define FSL_QDMA_CQIDR_SQPE		0x800000
+#define FSL_QDMA_CQIDR_SQT		0x8000
+
+#define FSL_QDMA_BCQIER_CQTIE		0x8000
+#define FSL_QDMA_BCQIER_CQPEIE		0x800000
+#define FSL_QDMA_BSQICR_ICEN		0x80000000
+#define FSL_QDMA_BSQICR_ICST(x)		((x) << 16)
+#define FSL_QDMA_CQIER_MEIE		0x80000000
+#define FSL_QDMA_CQIER_TEIE		0x1
+#define FSL_QDMA_SQCCMR_ENTER_WM	0x200000
+
+#define FSL_QDMA_QUEUE_MAX		8
+
+#define FSL_QDMA_BCQMR_EN		0x80000000
+#define FSL_QDMA_BCQMR_EI		0x40000000
+#define FSL_QDMA_BCQMR_EI_BE           0x40
+#define FSL_QDMA_BCQMR_CD_THLD(x)	((x) << 20)
+#define FSL_QDMA_BCQMR_CQ_SIZE(x)	((x) << 16)
+
+#define FSL_QDMA_BCQSR_QF		0x10000
+#define FSL_QDMA_BCQSR_XOFF		0x1
+#define FSL_QDMA_BCQSR_QF_XOFF_BE      0x1000100
+
+#define FSL_QDMA_BSQMR_EN		0x80000000
+#define FSL_QDMA_BSQMR_DI		0x40000000
+#define FSL_QDMA_BSQMR_DI_BE		0x40
+#define FSL_QDMA_BSQMR_CQ_SIZE(x)	((x) << 16)
+
+#define FSL_QDMA_BSQSR_QE		0x20000
+#define FSL_QDMA_BSQSR_QE_BE		0x200
+#define FSL_QDMA_BSQSR_QF		0x10000
+
+#define FSL_QDMA_DMR_DQD		0x40000000
+#define FSL_QDMA_DSR_DB			0x80000000
+
+#define FSL_QDMA_COMMAND_BUFFER_SIZE	64
+#define FSL_QDMA_DESCRIPTOR_BUFFER_SIZE 32
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MIN	64
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MAX	16384
+#define FSL_QDMA_QUEUE_NUM_MAX		8
+
+#define FSL_QDMA_CMD_RWTTYPE		0x4
+#define FSL_QDMA_CMD_LWC                0x2
+
+#define FSL_QDMA_CMD_RWTTYPE_OFFSET	28
+#define FSL_QDMA_CMD_NS_OFFSET		27
+#define FSL_QDMA_CMD_DQOS_OFFSET	24
+#define FSL_QDMA_CMD_WTHROTL_OFFSET	20
+#define FSL_QDMA_CMD_DSEN_OFFSET	19
+#define FSL_QDMA_CMD_LWC_OFFSET		16
+
+#define QDMA_CCDF_STATUS		20
+#define QDMA_CCDF_OFFSET		20
+#define QDMA_CCDF_MASK			GENMASK(28, 20)
+#define QDMA_CCDF_FOTMAT		BIT(29)
+#define QDMA_CCDF_SER			BIT(30)
+
+#define QDMA_SG_FIN			BIT(30)
+#define QDMA_SG_EXT			BIT(31)
+#define QDMA_SG_LEN_MASK		GENMASK(29, 0)
+
+#define QDMA_BIG_ENDIAN			1
+#define COMP_TIMEOUT			100000
+#define COMMAND_QUEUE_OVERFLLOW		10
+
+/* qdma engine attribute */
+#define QDMA_QUEUE_SIZE 64
+#define QDMA_STATUS_SIZE 64
+#define QDMA_CCSR_BASE 0x8380000
+#define VIRT_CHANNELS 32
+#define QDMA_BLOCK_OFFSET 0x10000
+#define QDMA_BLOCKS 4
+#define QDMA_QUEUES 8
+#define QDMA_DELAY 1000
+
+#ifdef QDMA_BIG_ENDIAN
+#define QDMA_IN(addr)		be32_to_cpu(rte_read32(addr))
+#define QDMA_OUT(addr, val)	rte_write32(be32_to_cpu(val), addr)
+#define QDMA_IN_BE(addr)	rte_read32(addr)
+#define QDMA_OUT_BE(addr, val)	rte_write32(val, addr)
+#else
+#define QDMA_IN(addr)		rte_read32(addr)
+#define QDMA_OUT(addr, val)	rte_write32(val, addr)
+#define QDMA_IN_BE(addr)	be32_to_cpu(rte_read32(addr))
+#define QDMA_OUT_BE(addr, val)	rte_write32(be32_to_cpu(val), addr)
+#endif
+
+#define FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma_engine, x)			\
+	(((fsl_qdma_engine)->block_offset) * (x))
+
+typedef void (*dma_call_back)(void *params);
+
+/* qDMA Command Descriptor Formats */
+struct fsl_qdma_format {
+	__le32 status; /* ser, status */
+	__le32 cfg;	/* format, offset */
+	union {
+		struct {
+			__le32 addr_lo;	/* low 32-bits of 40-bit address */
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u8 __reserved1[2];
+			u8 cfg8b_w1; /* dd, queue */
+		};
+		__le64 data;
+	};
+};
+
+/* qDMA Source Descriptor Format */
+struct fsl_qdma_sdf {
+	__le32 rev3;
+	__le32 cfg; /* rev4, bit[0-11] - ssd, bit[12-23] sss */
+	__le32 rev5;
+	__le32 cmd;
+};
+
+/* qDMA Destination Descriptor Format */
+struct fsl_qdma_ddf {
+	__le32 rev1;
+	__le32 cfg; /* rev2, bit[0-11] - dsd, bit[12-23] - dss */
+	__le32 rev3;
+	__le32 cmd;
+};
+
+enum dma_status {
+	DMA_COMPLETE,
+	DMA_IN_PROGRESS,
+	DMA_IN_PREPARE,
+	DMA_PAUSED,
+	DMA_ERROR,
+};
+
+struct fsl_qdma_chan {
+	struct fsl_qdma_engine	*qdma;
+	struct fsl_qdma_queue	*queue;
+	bool			free;
+	struct list_head	list;
+};
+
+struct fsl_qdma_list {
+	struct list_head	dma_list;
+};
+
+struct fsl_qdma_queue {
+	struct fsl_qdma_format	*virt_head;
+	struct list_head	comp_used;
+	struct list_head	comp_free;
+	dma_addr_t		bus_addr;
+	u32                     n_cq;
+	u32			id;
+	u32			count;
+	u32			pending;
+	struct fsl_qdma_format	*cq;
+	void			*block_base;
+};
+
+struct fsl_qdma_comp {
+	dma_addr_t              bus_addr;
+	dma_addr_t              desc_bus_addr;
+	void			*virt_addr;
+	int			index;
+	void			*desc_virt_addr;
+	struct fsl_qdma_chan	*qchan;
+	dma_call_back		call_back_func;
+	void			*params;
+	struct list_head	list;
+};
+
+struct fsl_qdma_engine {
+	int			desc_allocated;
+	void			*ctrl_base;
+	void			*status_base;
+	void			*block_base;
+	u32			n_chans;
+	u32			n_queues;
+	int			error_irq;
+	struct fsl_qdma_queue	*queue;
+	struct fsl_qdma_queue	**status;
+	struct fsl_qdma_chan	*chans;
+	u32			num_blocks;
+	u8			free_block_id;
+	u32			vchan_map[4];
+	int			block_offset;
+};
+
+static rte_atomic32_t wait_task[CORE_NUMBER];
+
+#endif /* _DPAA_QDMA_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v3 3/7] dma/dpaa: add driver logs
  2021-11-08  9:06     ` [dpdk-dev] [PATCH v3 0/7] Introduce " Gagandeep Singh
  2021-11-08  9:06       ` [dpdk-dev] [PATCH v3 1/7] dma/dpaa: introduce " Gagandeep Singh
  2021-11-08  9:06       ` [dpdk-dev] [PATCH v3 2/7] dma/dpaa: add device probe and remove functionality Gagandeep Singh
@ 2021-11-08  9:07       ` Gagandeep Singh
  2021-11-08  9:38         ` Thomas Monjalon
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 4/7] dma/dpaa: support basic operations Gagandeep Singh
                         ` (3 subsequent siblings)
  6 siblings, 1 reply; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-08  9:07 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch supports DPAA DMA driver logs.

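For illustration only (not part of the diff below), a minimal usage
sketch of these macros; check_reg_base() is a made-up helper:

	#include <errno.h>
	#include <rte_log.h>
	#include "dpaa_qdma_logs.h"

	/* hypothetical helper exercising the log macros added below */
	static int check_reg_base(void *base)
	{
		if (base == NULL) {
			DPAA_QDMA_ERR("NULL register base");
			return -EINVAL;
		}
		DPAA_QDMA_DP_DEBUG("register base %p validated", base);
		return 0;
	}
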
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c      | 22 ++++++++++++---
 drivers/dma/dpaa/dpaa_qdma_logs.h | 46 +++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+), 4 deletions(-)
 create mode 100644 drivers/dma/dpaa/dpaa_qdma_logs.h

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index f958f78af5..c3255dc0c7 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -6,6 +6,7 @@
 #include <rte_dmadev_pmd.h>
 
 #include "dpaa_qdma.h"
+#include "dpaa_qdma_logs.h"
 
 static inline int
 ilog2(int x)
@@ -114,6 +115,7 @@ static struct fsl_qdma_queue
 		for (i = 0; i < queue_num; i++) {
 			if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
 			    queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+				DPAA_QDMA_ERR("Get wrong queue-sizes.\n");
 				goto fail;
 			}
 			queue_temp = queue_head + i + (j * queue_num);
@@ -163,6 +165,7 @@ fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
 	status_size = QDMA_STATUS_SIZE;
 	if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
 	    status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+		DPAA_QDMA_ERR("Get wrong status_size.\n");
 		return NULL;
 	}
 
@@ -250,8 +253,10 @@ fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 
 	/* Try to halt the qDMA engine first. */
 	ret = fsl_qdma_halt(fsl_qdma);
-	if (ret)
+	if (ret) {
+		DPAA_QDMA_ERR("DMA halt failed!");
 		return ret;
+	}
 
 	for (j = 0; j < fsl_qdma->num_blocks; j++) {
 		block = fsl_qdma->block_base +
@@ -375,8 +380,10 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)
 	}
 
 	ccsr_qdma_fd = open("/dev/mem", O_RDWR);
-	if (unlikely(ccsr_qdma_fd < 0))
+	if (unlikely(ccsr_qdma_fd < 0)) {
+		DPAA_QDMA_ERR("Can not open /dev/mem for qdma CCSR map");
 		goto err;
+	}
 
 	regs_size = fsl_qdma->block_offset * (fsl_qdma->num_blocks + 2);
 	phys_addr = QDMA_CCSR_BASE;
@@ -385,8 +392,11 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)
 					 ccsr_qdma_fd, phys_addr);
 
 	close(ccsr_qdma_fd);
-	if (fsl_qdma->ctrl_base == MAP_FAILED)
+	if (fsl_qdma->ctrl_base == MAP_FAILED) {
+		DPAA_QDMA_ERR("Can not map CCSR base qdma: Phys: %08" PRIx64
+		       "size %d\n", phys_addr, regs_size);
 		goto err;
+	}
 
 	fsl_qdma->status_base = fsl_qdma->ctrl_base + QDMA_BLOCK_OFFSET;
 	fsl_qdma->block_base = fsl_qdma->status_base + QDMA_BLOCK_OFFSET;
@@ -408,6 +418,7 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)
 
 	ret = fsl_qdma_reg_init(fsl_qdma);
 	if (ret) {
+		DPAA_QDMA_ERR("Can't Initialize the qDMA engine.\n");
 		munmap(fsl_qdma->ctrl_base, regs_size);
 		goto err;
 	}
@@ -431,8 +442,10 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
 	dmadev = rte_dma_pmd_allocate(dpaa_dev->device.name,
 				      rte_socket_id(),
 				      sizeof(struct fsl_qdma_engine));
-	if (!dmadev)
+	if (!dmadev) {
+		DPAA_QDMA_ERR("Unable to allocate dmadevice");
 		return -EINVAL;
+	}
 
 	dpaa_dev->dmadev = dmadev;
 
@@ -478,3 +491,4 @@ static struct rte_dpaa_driver rte_dpaa_qdma_pmd = {
 };
 
 RTE_PMD_REGISTER_DPAA(dpaa_qdma, rte_dpaa_qdma_pmd);
+RTE_LOG_REGISTER_DEFAULT(dpaa_qdma_logtype, INFO);
diff --git a/drivers/dma/dpaa/dpaa_qdma_logs.h b/drivers/dma/dpaa/dpaa_qdma_logs.h
new file mode 100644
index 0000000000..01d4a508fc
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma_logs.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#ifndef __DPAA_QDMA_LOGS_H__
+#define __DPAA_QDMA_LOGS_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+extern int dpaa_qdma_logtype;
+
+#define DPAA_QDMA_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_qdma_logtype, "dpaa_qdma: " \
+		fmt "\n", ## args)
+
+#define DPAA_QDMA_DEBUG(fmt, args...) \
+	rte_log(RTE_LOG_DEBUG, dpaa_qdma_logtype, "dpaa_qdma: %s(): " \
+		fmt "\n", __func__, ## args)
+
+#define DPAA_QDMA_FUNC_TRACE() DPAA_QDMA_DEBUG(">>")
+
+#define DPAA_QDMA_INFO(fmt, args...) \
+	DPAA_QDMA_LOG(INFO, fmt, ## args)
+#define DPAA_QDMA_ERR(fmt, args...) \
+	DPAA_QDMA_LOG(ERR, fmt, ## args)
+#define DPAA_QDMA_WARN(fmt, args...) \
+	DPAA_QDMA_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define DPAA_QDMA_DP_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "dpaa_qdma: " fmt "\n", ## args)
+
+#define DPAA_QDMA_DP_DEBUG(fmt, args...) \
+	DPAA_QDMA_DP_LOG(DEBUG, fmt, ## args)
+#define DPAA_QDMA_DP_INFO(fmt, args...) \
+	DPAA_QDMA_DP_LOG(INFO, fmt, ## args)
+#define DPAA_QDMA_DP_WARN(fmt, args...) \
+	DPAA_QDMA_DP_LOG(WARNING, fmt, ## args)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __DPAA_QDMA_LOGS_H__ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v3 4/7] dma/dpaa: support basic operations
  2021-11-08  9:06     ` [dpdk-dev] [PATCH v3 0/7] Introduce " Gagandeep Singh
                         ` (2 preceding siblings ...)
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 3/7] dma/dpaa: add driver logs Gagandeep Singh
@ 2021-11-08  9:07       ` Gagandeep Singh
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 5/7] dma/dpaa: support DMA operations Gagandeep Singh
                         ` (2 subsequent siblings)
  6 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-08  9:07 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch supports basic DMA operations, which include
device capability query and channel setup.

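For context, an application drives these operations through the generic
dmadev API. A minimal sketch, assuming a valid dev_id (the function name
and descriptor count are illustrative):

	#include <rte_dmadev.h>

	static int setup_dpaa_dma(int16_t dev_id)
	{
		struct rte_dma_info info;
		struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
		struct rte_dma_vchan_conf qconf = {
			.direction = RTE_DMA_DIR_MEM_TO_MEM,
			.nb_desc = 64,	/* matches DPAADMA_MIN/MAX_DESC */
		};

		if (rte_dma_info_get(dev_id, &info) < 0)	/* dpaa_info_get */
			return -1;
		if (rte_dma_configure(dev_id, &dev_conf) < 0)	/* dpaa_qdma_configure */
			return -1;
		if (rte_dma_vchan_setup(dev_id, 0, &qconf) < 0)	/* dpaa_qdma_queue_setup */
			return -1;
		return rte_dma_start(dev_id);	/* dpaa_qdma_start */
	}
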
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 204 +++++++++++++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h |   6 ++
 2 files changed, 210 insertions(+)

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index c3255dc0c7..e59cd36872 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -8,6 +8,19 @@
 #include "dpaa_qdma.h"
 #include "dpaa_qdma_logs.h"
 
+static inline void
+qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
+{
+	ccdf->addr_hi = upper_32_bits(addr);
+	ccdf->addr_lo = rte_cpu_to_le_32(lower_32_bits(addr));
+}
+
+static inline void
+qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
+{
+	csgf->cfg = rte_cpu_to_le_32(len & QDMA_SG_LEN_MASK);
+}
+
 static inline int
 ilog2(int x)
 {
@@ -91,6 +104,77 @@ fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
 	fsl_qdma->desc_allocated--;
 }
 
+/*
+ * Pre-request command descriptor and compound S/G for enqueue.
+ */
+static int
+fsl_qdma_pre_request_enqueue_comp_sd_desc(
+					struct fsl_qdma_queue *queue,
+					int size, int aligned)
+{
+	struct fsl_qdma_comp *comp_temp, *_comp_temp;
+	struct fsl_qdma_sdf *sdf;
+	struct fsl_qdma_ddf *ddf;
+	struct fsl_qdma_format *csgf_desc;
+	int i;
+
+	for (i = 0; i < (int)(queue->n_cq + COMMAND_QUEUE_OVERFLOW); i++) {
+		comp_temp = rte_zmalloc("qdma: comp temp",
+					sizeof(*comp_temp), 0);
+		if (!comp_temp)
+			return -ENOMEM;
+
+		comp_temp->virt_addr =
+		dma_pool_alloc(size, aligned, &comp_temp->bus_addr);
+		if (!comp_temp->virt_addr) {
+			rte_free(comp_temp);
+			goto fail;
+		}
+
+		comp_temp->desc_virt_addr =
+		dma_pool_alloc(size, aligned, &comp_temp->desc_bus_addr);
+		if (!comp_temp->desc_virt_addr) {
+			rte_free(comp_temp->virt_addr);
+			rte_free(comp_temp);
+			goto fail;
+		}
+
+		memset(comp_temp->virt_addr, 0, FSL_QDMA_COMMAND_BUFFER_SIZE);
+		memset(comp_temp->desc_virt_addr, 0,
+		       FSL_QDMA_DESCRIPTOR_BUFFER_SIZE);
+
+		csgf_desc = (struct fsl_qdma_format *)comp_temp->virt_addr + 1;
+		sdf = (struct fsl_qdma_sdf *)comp_temp->desc_virt_addr;
+		ddf = (struct fsl_qdma_ddf *)comp_temp->desc_virt_addr + 1;
+		/* Compound Command Descriptor(Frame List Table) */
+		qdma_desc_addr_set64(csgf_desc, comp_temp->desc_bus_addr);
+		/* The length must be 32, the size of a compound S/G descriptor */
+		qdma_csgf_set_len(csgf_desc, 32);
+		/* Descriptor Buffer */
+		sdf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
+			       FSL_QDMA_CMD_RWTTYPE_OFFSET);
+		ddf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
+			       FSL_QDMA_CMD_RWTTYPE_OFFSET);
+		ddf->cmd |= rte_cpu_to_le_32(FSL_QDMA_CMD_LWC <<
+				FSL_QDMA_CMD_LWC_OFFSET);
+
+		list_add_tail(&comp_temp->list, &queue->comp_free);
+	}
+
+	return 0;
+
+fail:
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &queue->comp_free, list) {
+		list_del(&comp_temp->list);
+		rte_free(comp_temp->virt_addr);
+		rte_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+	return -ENOMEM;
+}
+
 static struct fsl_qdma_queue
 *fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
 {
@@ -335,6 +419,84 @@ fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
+static int
+fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+	int ret;
+
+	if (fsl_queue->count++)
+		goto finally;
+
+	INIT_LIST_HEAD(&fsl_queue->comp_free);
+	INIT_LIST_HEAD(&fsl_queue->comp_used);
+
+	ret = fsl_qdma_pre_request_enqueue_comp_sd_desc(fsl_queue,
+				FSL_QDMA_COMMAND_BUFFER_SIZE, 64);
+	if (ret) {
+		DPAA_QDMA_ERR(
+			"failed to alloc DMA buffer for comp descriptor");
+		goto exit;
+	}
+
+finally:
+	return fsl_qdma->desc_allocated++;
+
+exit:
+	return -ENOMEM;
+}
+
+static int
+dpaa_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info,
+	      uint32_t info_sz)
+{
+#define DPAADMA_MAX_DESC        64
+#define DPAADMA_MIN_DESC        64
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(info_sz);
+
+	dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM |
+			     RTE_DMA_CAPA_MEM_TO_DEV |
+			     RTE_DMA_CAPA_DEV_TO_DEV |
+			     RTE_DMA_CAPA_DEV_TO_MEM |
+			     RTE_DMA_CAPA_SILENT |
+			     RTE_DMA_CAPA_OPS_COPY;
+	dev_info->max_vchans = 1;
+	dev_info->max_desc = DPAADMA_MAX_DESC;
+	dev_info->min_desc = DPAADMA_MIN_DESC;
+
+	return 0;
+}
+
+static int
+dpaa_get_channel(struct fsl_qdma_engine *fsl_qdma,  uint16_t vchan)
+{
+	u32 i, start, end;
+	int ret;
+
+	start = fsl_qdma->free_block_id * QDMA_QUEUES;
+	fsl_qdma->free_block_id++;
+
+	end = start + 1;
+	for (i = start; i < end; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		if (fsl_chan->free) {
+			fsl_chan->free = false;
+			ret = fsl_qdma_alloc_chan_resources(fsl_chan);
+			if (ret)
+				return ret;
+
+			fsl_qdma->vchan_map[vchan] = i;
+			return 0;
+		}
+	}
+
+	return -1;
+}
+
 static void
 dma_release(void *fsl_chan)
 {
@@ -342,6 +504,45 @@ dma_release(void *fsl_chan)
 	fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
 }
 
+static int
+dpaa_qdma_configure(__rte_unused struct rte_dma_dev *dmadev,
+		    __rte_unused const struct rte_dma_conf *dev_conf,
+		    __rte_unused uint32_t conf_sz)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_start(__rte_unused struct rte_dma_dev *dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_close(__rte_unused struct rte_dma_dev *dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_queue_setup(struct rte_dma_dev *dmadev,
+		      uint16_t vchan,
+		      __rte_unused const struct rte_dma_vchan_conf *conf,
+		      __rte_unused uint32_t conf_sz)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+
+	return dpaa_get_channel(fsl_qdma, vchan);
+}
+
+static struct rte_dma_dev_ops dpaa_qdma_ops = {
+	.dev_info_get		  = dpaa_info_get,
+	.dev_configure            = dpaa_qdma_configure,
+	.dev_start                = dpaa_qdma_start,
+	.dev_close                = dpaa_qdma_close,
+	.vchan_setup		  = dpaa_qdma_queue_setup,
+};
+
 static int
 dpaa_qdma_init(struct rte_dma_dev *dmadev)
 {
@@ -448,6 +649,9 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
 	}
 
 	dpaa_dev->dmadev = dmadev;
+	dmadev->dev_ops = &dpaa_qdma_ops;
+	dmadev->device = &dpaa_dev->device;
+	dmadev->fp_obj->dev_private = dmadev->data->dev_private;
 
 	/* Invoke PMD device initialization function */
 	ret = dpaa_qdma_init(dmadev);
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
index c05620b740..f046167108 100644
--- a/drivers/dma/dpaa/dpaa_qdma.h
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -10,6 +10,12 @@
 #define CORE_NUMBER 4
 #define RETRIES	5
 
+#ifndef GENMASK
+#define BITS_PER_LONG	(__SIZEOF_LONG__ * 8)
+#define GENMASK(h, l) \
+		(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+#endif
+
 #define FSL_QDMA_DMR			0x0
 #define FSL_QDMA_DSR			0x4
 #define FSL_QDMA_DEIER			0xe00
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v3 5/7] dma/dpaa: support DMA operations
  2021-11-08  9:06     ` [dpdk-dev] [PATCH v3 0/7] Introduce " Gagandeep Singh
                         ` (3 preceding siblings ...)
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 4/7] dma/dpaa: support basic operations Gagandeep Singh
@ 2021-11-08  9:07       ` Gagandeep Singh
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 6/7] dma/dpaa: support statistics Gagandeep Singh
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 7/7] doc: add user guide of DPAA DMA driver Gagandeep Singh
  6 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-08  9:07 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch supports the copy, submit, completed and
completed-status functionality of the DMA driver.

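For context, the corresponding application-side flow through the dmadev
API is roughly the following sketch (copy_one() is a made-up helper;
src/dst IOVAs are assumed to be set up already):

	#include <rte_dmadev.h>

	static int copy_one(int16_t dev_id, rte_iova_t src, rte_iova_t dst,
			    uint32_t len)
	{
		uint16_t last_idx;
		bool has_error = false;

		/* enqueue one copy and ring the doorbell in the same call */
		if (rte_dma_copy(dev_id, 0, src, dst, len,
				 RTE_DMA_OP_FLAG_SUBMIT) < 0)
			return -1;	/* ring full: BCQSR reported full/XOFF */
		/* with flags == 0, copies are batched until rte_dma_submit() */

		while (rte_dma_completed(dev_id, 0, 1, &last_idx, &has_error) == 0)
			;	/* poll the per-block status queue */
		return has_error ? -1 : 0;
	}
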
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 334 +++++++++++++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h |   4 +
 2 files changed, 338 insertions(+)

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index e59cd36872..ebe6211f08 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -15,12 +15,50 @@ qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
 	ccdf->addr_lo = rte_cpu_to_le_32(lower_32_bits(addr));
 }
 
+static inline u64
+qdma_ccdf_get_queue(const struct fsl_qdma_format *ccdf)
+{
+	return ccdf->cfg8b_w1 & 0xff;
+}
+
+static inline int
+qdma_ccdf_get_offset(const struct fsl_qdma_format *ccdf)
+{
+	return (rte_le_to_cpu_32(ccdf->cfg) & QDMA_CCDF_MASK)
+		>> QDMA_CCDF_OFFSET;
+}
+
+static inline void
+qdma_ccdf_set_format(struct fsl_qdma_format *ccdf, int offset)
+{
+	ccdf->cfg = rte_cpu_to_le_32(QDMA_CCDF_FORMAT | offset);
+}
+
+static inline int
+qdma_ccdf_get_status(const struct fsl_qdma_format *ccdf)
+{
+	return (rte_le_to_cpu_32(ccdf->status) & QDMA_CCDF_MASK)
+		>> QDMA_CCDF_STATUS;
+}
+
+static inline void
+qdma_ccdf_set_ser(struct fsl_qdma_format *ccdf, int status)
+{
+	ccdf->status = rte_cpu_to_le_32(QDMA_CCDF_SER | status);
+}
+
 static inline void
 qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
 {
 	csgf->cfg = rte_cpu_to_le_32(len & QDMA_SG_LEN_MASK);
 }
 
+static inline void
+qdma_csgf_set_f(struct fsl_qdma_format *csgf, int len)
+{
+	csgf->cfg = rte_cpu_to_le_32(QDMA_SG_FIN | (len & QDMA_SG_LEN_MASK));
+}
+
 static inline int
 ilog2(int x)
 {
@@ -47,6 +85,18 @@ qdma_writel(u32 val, void *addr)
 	QDMA_OUT(addr, val);
 }
 
+static u32
+qdma_readl_be(void *addr)
+{
+	return QDMA_IN_BE(addr);
+}
+
+static void
+qdma_writel_be(u32 val, void *addr)
+{
+	QDMA_OUT_BE(addr, val);
+}
+
 static void
 *dma_pool_alloc(int size, int aligned, dma_addr_t *phy_addr)
 {
@@ -104,6 +154,32 @@ fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
 	fsl_qdma->desc_allocated--;
 }
 
+static void
+fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
+				      dma_addr_t dst, dma_addr_t src, u32 len)
+{
+	struct fsl_qdma_format *csgf_src, *csgf_dest;
+
+	/* Note: the command table (fsl_comp->virt_addr) is filled
+	 * directly into the queue's command descriptors while enqueuing;
+	 * please refer to fsl_qdma_enqueue_desc. The frame list table
+	 * (virt_addr + 1) and the source/destination descriptor tables
+	 * (fsl_comp->desc_virt_addr and fsl_comp->desc_virt_addr + 1)
+	 * are set up on the control path in
+	 * fsl_qdma_pre_request_enqueue_comp_sd_desc.
+	 */
+	csgf_src = (struct fsl_qdma_format *)fsl_comp->virt_addr + 2;
+	csgf_dest = (struct fsl_qdma_format *)fsl_comp->virt_addr + 3;
+
+	/* Status notification is enqueued to status queue. */
+	qdma_desc_addr_set64(csgf_src, src);
+	qdma_csgf_set_len(csgf_src, len);
+	qdma_desc_addr_set64(csgf_dest, dst);
+	qdma_csgf_set_len(csgf_dest, len);
+	/* This entry is the last entry. */
+	qdma_csgf_set_f(csgf_dest, len);
+}
+
 /*
  * Pre-request command descriptor and compound S/G for enqueue.
  */
@@ -175,6 +251,26 @@ fsl_qdma_pre_request_enqueue_comp_sd_desc(
 	return -ENOMEM;
 }
 
+/*
+ * Request a command descriptor for enqueue.
+ */
+static struct fsl_qdma_comp *
+fsl_qdma_request_enqueue_desc(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *queue = fsl_chan->queue;
+	struct fsl_qdma_comp *comp_temp;
+
+	if (!list_empty(&queue->comp_free)) {
+		comp_temp = list_first_entry(&queue->comp_free,
+					     struct fsl_qdma_comp,
+					     list);
+		list_del(&comp_temp->list);
+		return comp_temp;
+	}
+
+	return NULL;
+}
+
 static struct fsl_qdma_queue
 *fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
 {
@@ -324,6 +420,54 @@ fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
+static int
+fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma,
+				 void *block, int id, const uint16_t nb_cpls,
+				 uint16_t *last_idx,
+				 enum rte_dma_status_code *status)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+	struct fsl_qdma_queue *fsl_status = fsl_qdma->status[id];
+	struct fsl_qdma_queue *temp_queue;
+	struct fsl_qdma_format *status_addr;
+	struct fsl_qdma_comp *fsl_comp = NULL;
+	u32 reg, i;
+	int count = 0;
+
+	while (count < nb_cpls) {
+		reg = qdma_readl_be(block + FSL_QDMA_BSQSR);
+		if (reg & FSL_QDMA_BSQSR_QE_BE)
+			return count;
+
+		status_addr = fsl_status->virt_head;
+
+		i = qdma_ccdf_get_queue(status_addr) +
+			id * fsl_qdma->n_queues;
+		temp_queue = fsl_queue + i;
+		fsl_comp = list_first_entry(&temp_queue->comp_used,
+					    struct fsl_qdma_comp,
+					    list);
+		list_del(&fsl_comp->list);
+
+		reg = qdma_readl_be(block + FSL_QDMA_BSQMR);
+		reg |= FSL_QDMA_BSQMR_DI_BE;
+
+		qdma_desc_addr_set64(status_addr, 0x0);
+		fsl_status->virt_head++;
+		if (fsl_status->virt_head == fsl_status->cq + fsl_status->n_cq)
+			fsl_status->virt_head = fsl_status->cq;
+		qdma_writel_be(reg, block + FSL_QDMA_BSQMR);
+		*last_idx = fsl_comp->index;
+		if (status != NULL)
+			status[count] = RTE_DMA_STATUS_SUCCESSFUL;
+
+		list_add_tail(&fsl_comp->list, &temp_queue->comp_free);
+		count++;
+
+	}
+	return count;
+}
+
 static int
 fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 {
@@ -419,6 +563,66 @@ fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
+static void *
+fsl_qdma_prep_memcpy(void *fsl_chan, dma_addr_t dst,
+			   dma_addr_t src, size_t len,
+			   void *call_back,
+			   void *param)
+{
+	struct fsl_qdma_comp *fsl_comp;
+
+	fsl_comp =
+	fsl_qdma_request_enqueue_desc((struct fsl_qdma_chan *)fsl_chan);
+	if (!fsl_comp)
+		return NULL;
+
+	fsl_comp->qchan = fsl_chan;
+	fsl_comp->call_back_func = call_back;
+	fsl_comp->params = param;
+
+	fsl_qdma_comp_fill_memcpy(fsl_comp, dst, src, len);
+	return (void *)fsl_comp;
+}
+
+static int
+fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan,
+				  struct fsl_qdma_comp *fsl_comp,
+				  uint64_t flags)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	void *block = fsl_queue->block_base;
+	struct fsl_qdma_format *ccdf;
+	u32 reg;
+
+	/* Retrieve and store the register value in big endian
+	 * to avoid a byte swap
+	 */
+	reg = qdma_readl_be(block +
+			 FSL_QDMA_BCQSR(fsl_queue->id));
+	if (reg & (FSL_QDMA_BCQSR_QF_XOFF_BE))
+		return -1;
+
+	/* Fill the descriptor command table */
+	ccdf = (struct fsl_qdma_format *)fsl_queue->virt_head;
+	qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16);
+	qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(fsl_comp->virt_addr));
+	qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(fsl_comp->virt_addr));
+	fsl_comp->index = fsl_queue->virt_head - fsl_queue->cq;
+	fsl_queue->virt_head++;
+
+	if (fsl_queue->virt_head == fsl_queue->cq + fsl_queue->n_cq)
+		fsl_queue->virt_head = fsl_queue->cq;
+
+	list_add_tail(&fsl_comp->list, &fsl_queue->comp_used);
+
+	if (flags == RTE_DMA_OP_FLAG_SUBMIT) {
+		reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
+		reg |= FSL_QDMA_BCQMR_EI_BE;
+		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+	}
+	return fsl_comp->index;
+}
+
 static int
 fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
 {
@@ -535,6 +739,132 @@ dpaa_qdma_queue_setup(struct rte_dma_dev *dmadev,
 	return dpaa_get_channel(fsl_qdma, vchan);
 }
 
+static int
+dpaa_qdma_submit(void *dev_private, uint16_t vchan)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	void *block = fsl_queue->block_base;
+	u32 reg;
+
+	while (fsl_queue->pending) {
+		reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
+		reg |= FSL_QDMA_BCQMR_EI_BE;
+		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+		fsl_queue->pending--;
+	}
+
+	return 0;
+}
+
+static int
+dpaa_qdma_enqueue(void *dev_private, uint16_t vchan,
+		  rte_iova_t src, rte_iova_t dst,
+		  uint32_t length, uint64_t flags)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	int ret;
+
+	void *fsl_comp = NULL;
+
+	fsl_comp = fsl_qdma_prep_memcpy(fsl_chan,
+			(dma_addr_t)dst, (dma_addr_t)src,
+			length, NULL, NULL);
+	if (!fsl_comp) {
+		DPAA_QDMA_DP_DEBUG("fsl_comp is NULL\n");
+		return -1;
+	}
+	ret = fsl_qdma_enqueue_desc(fsl_chan, fsl_comp, flags);
+
+	return ret;
+}
+
+static uint16_t
+dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,
+			 const uint16_t nb_cpls, uint16_t *last_idx,
+			 enum rte_dma_status_code *st)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	int id = (int)((fsl_qdma->vchan_map[vchan]) / QDMA_QUEUES);
+	void *block;
+	int intr;
+	void *status = fsl_qdma->status_base;
+
+	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
+	if (intr) {
+		DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECBR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+		qdma_writel(0xffffffff,
+			    status + FSL_QDMA_DEDR);
+		intr = qdma_readl(status + FSL_QDMA_DEDR);
+	}
+
+	block = fsl_qdma->block_base +
+		FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id);
+
+	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
+						last_idx, st);
+
+	return intr;
+}
+
+
+static uint16_t
+dpaa_qdma_dequeue(void *dev_private,
+		  uint16_t vchan, const uint16_t nb_cpls,
+		  uint16_t *last_idx, bool *has_error)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	int id = (int)((fsl_qdma->vchan_map[vchan]) / QDMA_QUEUES);
+	void *block;
+	int intr;
+	void *status = fsl_qdma->status_base;
+
+	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
+	if (intr) {
+		DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECBR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+		qdma_writel(0xffffffff,
+			    status + FSL_QDMA_DEDR);
+		intr = qdma_readl(status + FSL_QDMA_DEDR);
+		*has_error = true;
+	}
+
+	block = fsl_qdma->block_base +
+		FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id);
+
+	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
+						last_idx, NULL);
+
+	return intr;
+}
+
 static struct rte_dma_dev_ops dpaa_qdma_ops = {
 	.dev_info_get		  = dpaa_info_get,
 	.dev_configure            = dpaa_qdma_configure,
@@ -652,6 +982,10 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
 	dmadev->dev_ops = &dpaa_qdma_ops;
 	dmadev->device = &dpaa_dev->device;
 	dmadev->fp_obj->dev_private = dmadev->data->dev_private;
+	dmadev->fp_obj->copy = dpaa_qdma_enqueue;
+	dmadev->fp_obj->submit = dpaa_qdma_submit;
+	dmadev->fp_obj->completed = dpaa_qdma_dequeue;
+	dmadev->fp_obj->completed_status = dpaa_qdma_dequeue_status;
 
 	/* Invoke PMD device initialization function */
 	ret = dpaa_qdma_init(dmadev);
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
index f046167108..6d0ac58317 100644
--- a/drivers/dma/dpaa/dpaa_qdma.h
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -7,6 +7,10 @@
 
 #include <rte_io.h>
 
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
 #define CORE_NUMBER 4
 #define RETRIES	5
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v3 6/7] dma/dpaa: support statistics
  2021-11-08  9:06     ` [dpdk-dev] [PATCH v3 0/7] Introduce " Gagandeep Singh
                         ` (4 preceding siblings ...)
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 5/7] dma/dpaa: support DMA operations Gagandeep Singh
@ 2021-11-08  9:07       ` Gagandeep Singh
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 7/7] doc: add user guide of DPAA DMA driver Gagandeep Singh
  6 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-08  9:07 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch supports the DMA statistics read and reset
operations.

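For context, an application reads and clears these counters through the
generic dmadev API; a minimal sketch (dump_dma_stats() is a made-up
helper, a valid dev_id is assumed):

	#include <stdio.h>
	#include <inttypes.h>
	#include <rte_dmadev.h>

	static void dump_dma_stats(int16_t dev_id)
	{
		struct rte_dma_stats stats;

		if (rte_dma_stats_get(dev_id, 0, &stats) == 0)	/* dpaa_qdma_stats_get */
			printf("submitted=%" PRIu64 " completed=%" PRIu64
			       " errors=%" PRIu64 "\n", stats.submitted,
			       stats.completed, stats.errors);
		rte_dma_stats_reset(dev_id, 0);	/* dpaa_qdma_stats_reset */
	}
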
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 51 +++++++++++++++++++++++++++++++++++-
 drivers/dma/dpaa/dpaa_qdma.h |  1 +
 2 files changed, 51 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index ebe6211f08..cb272c700f 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -319,7 +319,7 @@ static struct fsl_qdma_queue
 			queue_temp->count = 0;
 			queue_temp->pending = 0;
 			queue_temp->virt_head = queue_temp->cq;
-
+			queue_temp->stats = (struct rte_dma_stats){0};
 		}
 	}
 	return queue_head;
@@ -619,6 +619,9 @@ fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan,
 		reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
 		reg |= FSL_QDMA_BCQMR_EI_BE;
 		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+		fsl_queue->stats.submitted++;
+	} else {
+		fsl_queue->pending++;
 	}
 	return fsl_comp->index;
 }
@@ -754,6 +757,7 @@ dpaa_qdma_submit(void *dev_private, uint16_t vchan)
 		reg |= FSL_QDMA_BCQMR_EI_BE;
 		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
 		fsl_queue->pending--;
+		fsl_queue->stats.submitted++;
 	}
 
 	return 0;
@@ -793,6 +797,9 @@ dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,
 	void *block;
 	int intr;
 	void *status = fsl_qdma->status_base;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
 
 	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
 	if (intr) {
@@ -812,6 +819,7 @@ dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,
 		qdma_writel(0xffffffff,
 			    status + FSL_QDMA_DEDR);
 		intr = qdma_readl(status + FSL_QDMA_DEDR);
+		fsl_queue->stats.errors++;
 	}
 
 	block = fsl_qdma->block_base +
@@ -819,6 +827,7 @@ dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,
 
 	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
 						last_idx, st);
+	fsl_queue->stats.completed += intr;
 
 	return intr;
 }
@@ -834,6 +843,9 @@ dpaa_qdma_dequeue(void *dev_private,
 	void *block;
 	int intr;
 	void *status = fsl_qdma->status_base;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
 
 	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
 	if (intr) {
@@ -854,6 +866,7 @@ dpaa_qdma_dequeue(void *dev_private,
 			    status + FSL_QDMA_DEDR);
 		intr = qdma_readl(status + FSL_QDMA_DEDR);
 		*has_error = true;
+		fsl_queue->stats.errors++;
 	}
 
 	block = fsl_qdma->block_base +
@@ -861,16 +874,52 @@ dpaa_qdma_dequeue(void *dev_private,
 
 	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
 						last_idx, NULL);
+	fsl_queue->stats.completed += intr;
 
 	return intr;
 }
 
+static int
+dpaa_qdma_stats_get(const struct rte_dma_dev *dmadev, uint16_t vchan,
+		    struct rte_dma_stats *rte_stats, uint32_t size)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct rte_dma_stats *stats = &fsl_queue->stats;
+
+	if (size < sizeof(*rte_stats))
+		return -EINVAL;
+	if (rte_stats == NULL)
+		return -EINVAL;
+
+	*rte_stats = *stats;
+
+	return 0;
+}
+
+static int
+dpaa_qdma_stats_reset(struct rte_dma_dev *dmadev, uint16_t vchan)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+
+	fsl_queue->stats = (struct rte_dma_stats){0};
+
+	return 0;
+}
+
 static struct rte_dma_dev_ops dpaa_qdma_ops = {
 	.dev_info_get		  = dpaa_info_get,
 	.dev_configure            = dpaa_qdma_configure,
 	.dev_start                = dpaa_qdma_start,
 	.dev_close                = dpaa_qdma_close,
 	.vchan_setup		  = dpaa_qdma_queue_setup,
+	.stats_get		  = dpaa_qdma_stats_get,
+	.stats_reset		  = dpaa_qdma_stats_reset,
 };
 
 static int
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
index 6d0ac58317..bf49b2d5d9 100644
--- a/drivers/dma/dpaa/dpaa_qdma.h
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -210,6 +210,7 @@ struct fsl_qdma_queue {
 	u32			pending;
 	struct fsl_qdma_format	*cq;
 	void			*block_base;
+	struct rte_dma_stats	stats;
 };
 
 struct fsl_qdma_comp {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v3 7/7] doc: add user guide of DPAA DMA driver
  2021-11-08  9:06     ` [dpdk-dev] [PATCH v3 0/7] Introduce " Gagandeep Singh
                         ` (5 preceding siblings ...)
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 6/7] dma/dpaa: support statistics Gagandeep Singh
@ 2021-11-08  9:07       ` Gagandeep Singh
  2021-11-08  9:37         ` Thomas Monjalon
  2021-11-08  9:39         ` Thomas Monjalon
  6 siblings, 2 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-08  9:07 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch adds DPAA DMA user guide.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 MAINTAINERS                 |  1 +
 doc/guides/dmadevs/dpaa.rst | 60 +++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)
 create mode 100644 doc/guides/dmadevs/dpaa.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 76b9fb8e6c..a5ad16e309 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1361,6 +1361,7 @@ NXP DPAA DMA
 M: Gagandeep Singh <g.singh@nxp.com>
 M: Nipun Gupta <nipun.gupta@nxp.com>
 F: drivers/dma/dpaa/
+F: doc/guides/dmadevs/dpaa.rst
 
 
 Packet processing
diff --git a/doc/guides/dmadevs/dpaa.rst b/doc/guides/dmadevs/dpaa.rst
new file mode 100644
index 0000000000..ed9628ed79
--- /dev/null
+++ b/doc/guides/dmadevs/dpaa.rst
@@ -0,0 +1,60 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright 2021 NXP
+
+NXP DPAA DMA Driver
+=====================
+
+The DPAA DMA driver is an implementation of the dmadev APIs that provides
+a means to initiate a DMA transaction from the CPU. The initiated DMA is
+performed without the CPU being involved in the actual DMA transaction.
+This is achieved by using the QDMA controller of the DPAA SoC.
+
+The QDMA controller transfers blocks of data between one source and one
+destination. The blocks of data transferred can be represented in memory
+as contiguous or noncontiguous using scatter/gather table(s).
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+Features
+--------
+
+The DPAA DMA driver implements the following features in the dmadev API:
+
+- Supports 1 virtual channel.
+- Supports all four DMA transfer directions: MEM_TO_MEM, MEM_TO_DEV,
+  DEV_TO_MEM, DEV_TO_DEV.
+- Supports DMA silent mode.
+- Supports issuing DMA of data within memory without hogging CPU while
+  performing DMA operation.
+
+Supported DPAA SoCs
+--------------------
+
+- LS1046A
+- LS1043A
+
+Prerequisites
+-------------
+
+See :doc:`../platform/dpaa` for setup information.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+.. note::
+
+   Some parts of the dpaa bus code (the qbman and fman library routines) are
+   dual licensed (BSD & GPLv2); however, they are used as BSD in DPDK in userspace.
+
+Initialization
+--------------
+
+On EAL initialization, DPAA DMA devices will be detected on the DPAA bus,
+probed, and populated into the device list.
+
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+The DPAA DMA driver for DPDK works only on the NXP SoCs listed under
+``Supported DPAA SoCs``.
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v3 7/7] doc: add user guide of DPAA DMA driver
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 7/7] doc: add user guide of DPAA DMA driver Gagandeep Singh
@ 2021-11-08  9:37         ` Thomas Monjalon
  2021-11-08  9:39         ` Thomas Monjalon
  1 sibling, 0 replies; 42+ messages in thread
From: Thomas Monjalon @ 2021-11-08  9:37 UTC (permalink / raw)
  To: Gagandeep Singh; +Cc: dev, nipun.gupta

08/11/2021 10:07, Gagandeep Singh:
> This patch adds DPAA DMA user guide.

Please, I prefer having the doc introduced along with the other patches.
Each time a patch introduces a feature, the doc should be updated.




^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v3 3/7] dma/dpaa: add driver logs
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 3/7] dma/dpaa: add driver logs Gagandeep Singh
@ 2021-11-08  9:38         ` Thomas Monjalon
  0 siblings, 0 replies; 42+ messages in thread
From: Thomas Monjalon @ 2021-11-08  9:38 UTC (permalink / raw)
  To: Gagandeep Singh; +Cc: dev, nipun.gupta

08/11/2021 10:07, Gagandeep Singh:
> This patch supports DPAA DMA driver logs.

That's a strange patch.
The log macros should be introduced in the first patch.
And the error messages should come with the rest of the code (in patch 2?)




^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v3 7/7] doc: add user guide of DPAA DMA driver
  2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 7/7] doc: add user guide of DPAA DMA driver Gagandeep Singh
  2021-11-08  9:37         ` Thomas Monjalon
@ 2021-11-08  9:39         ` Thomas Monjalon
  1 sibling, 0 replies; 42+ messages in thread
From: Thomas Monjalon @ 2021-11-08  9:39 UTC (permalink / raw)
  To: Gagandeep Singh; +Cc: dev, nipun.gupta

08/11/2021 10:07, Gagandeep Singh:
> This patch adds DPAA DMA user guide.
> 
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> ---
>  MAINTAINERS                 |  1 +
>  doc/guides/dmadevs/dpaa.rst | 60 +++++++++++++++++++++++++++++++++++++
>  2 files changed, 61 insertions(+)

Please do not forget the doc index.




^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v4 0/5] Introduce DPAA DMA driver
  2021-11-08  9:06       ` [dpdk-dev] [PATCH v3 1/7] dma/dpaa: introduce " Gagandeep Singh
@ 2021-11-09  4:39         ` Gagandeep Singh
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 1/5] dma/dpaa: introduce " Gagandeep Singh
                             ` (5 more replies)
  0 siblings, 6 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-09  4:39 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This series support DMA driver for NXP
1046A and 1043A SoCs.

v2-change-log:
* series rebased with latest dma driver

v3-change-log:
* support statistics.
* replaced local endianness conversion functions with rte_*.
* improved submit API logic.
* Handled all comments given by fengchengwen

v4-change-log:
* merged driver log patch with first patch.
* merged document patch with all other patches.

Gagandeep Singh (5):
  dma/dpaa: introduce DPAA DMA driver
  dma/dpaa: add device probe and remove functionality
  dma/dpaa: support basic operations
  dma/dpaa: support DMA operations
  dma/dpaa: support statistics

 MAINTAINERS                            |   11 +
 doc/guides/dmadevs/dpaa.rst            |   66 ++
 doc/guides/dmadevs/index.rst           |    1 +
 doc/guides/rel_notes/release_21_11.rst |    3 +
 drivers/bus/dpaa/dpaa_bus.c            |   22 +
 drivers/bus/dpaa/rte_dpaa_bus.h        |    5 +
 drivers/common/dpaax/dpaa_list.h       |    2 +
 drivers/dma/dpaa/dpaa_qdma.c           | 1081 ++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h           |  247 ++++++
 drivers/dma/dpaa/dpaa_qdma_logs.h      |   46 +
 drivers/dma/dpaa/meson.build           |   14 +
 drivers/dma/dpaa/version.map           |    4 +
 drivers/dma/meson.build                |    3 +-
 13 files changed, 1504 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/dmadevs/dpaa.rst
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.c
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.h
 create mode 100644 drivers/dma/dpaa/dpaa_qdma_logs.h
 create mode 100644 drivers/dma/dpaa/meson.build
 create mode 100644 drivers/dma/dpaa/version.map

-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v4 1/5] dma/dpaa: introduce DPAA DMA driver
  2021-11-09  4:39         ` [dpdk-dev] [PATCH v4 0/5] Introduce " Gagandeep Singh
@ 2021-11-09  4:39           ` Gagandeep Singh
  2021-11-09 14:44             ` Thomas Monjalon
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 2/5] dma/dpaa: add device probe and remove functionality Gagandeep Singh
                             ` (4 subsequent siblings)
  5 siblings, 1 reply; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-09  4:39 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

The DPAA DMA driver is an implementation of the dmadev APIs
that provides a means to initiate a DMA transaction from the CPU.
The initiated DMA is performed without the CPU being involved
in the actual DMA transaction. This is achieved by using
the QDMA controller of the DPAA SoC.

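Once probed, the device is visible through the generic dmadev API; a
minimal discovery sketch (find_first_dmadev() is a made-up helper):

	#include <rte_dmadev.h>

	static int16_t find_first_dmadev(void)
	{
		int16_t dev_id;

		if (rte_dma_count_avail() == 0)
			return -1;
		for (dev_id = 0; ; dev_id++)
			if (rte_dma_is_valid(dev_id))
				return dev_id;	/* e.g. the "dpaa_qdma-1" device */
	}
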
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 MAINTAINERS                            | 11 ++++++
 doc/guides/dmadevs/dpaa.rst            | 54 ++++++++++++++++++++++++++
 doc/guides/dmadevs/index.rst           |  1 +
 doc/guides/rel_notes/release_21_11.rst |  3 ++
 drivers/bus/dpaa/dpaa_bus.c            | 22 +++++++++++
 drivers/bus/dpaa/rte_dpaa_bus.h        |  5 +++
 drivers/common/dpaax/dpaa_list.h       |  2 +
 drivers/dma/dpaa/dpaa_qdma.c           | 29 ++++++++++++++
 drivers/dma/dpaa/dpaa_qdma_logs.h      | 46 ++++++++++++++++++++++
 drivers/dma/dpaa/meson.build           | 14 +++++++
 drivers/dma/dpaa/version.map           |  4 ++
 drivers/dma/meson.build                |  1 +
 12 files changed, 192 insertions(+)
 create mode 100644 doc/guides/dmadevs/dpaa.rst
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.c
 create mode 100644 drivers/dma/dpaa/dpaa_qdma_logs.h
 create mode 100644 drivers/dma/dpaa/meson.build
 create mode 100644 drivers/dma/dpaa/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index e157e12f88..0f333b7baa 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1377,6 +1377,17 @@ F: drivers/raw/dpaa2_qdma/
 F: doc/guides/rawdevs/dpaa2_qdma.rst
 
 
+
+Dmadev Drivers
+--------------
+
+NXP DPAA DMA
+M: Gagandeep Singh <g.singh@nxp.com>
+M: Nipun Gupta <nipun.gupta@nxp.com>
+F: drivers/dma/dpaa/
+F: doc/guides/dmadevs/dpaa.rst
+
+
 Packet processing
 -----------------
 
diff --git a/doc/guides/dmadevs/dpaa.rst b/doc/guides/dmadevs/dpaa.rst
new file mode 100644
index 0000000000..885a8bb8aa
--- /dev/null
+++ b/doc/guides/dmadevs/dpaa.rst
@@ -0,0 +1,54 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright 2021 NXP
+
+NXP DPAA DMA Driver
+=====================
+
+The DPAA DMA driver is an implementation of the dmadev APIs that provides
+a means to initiate a DMA transaction from the CPU. The initiated DMA is
+performed without the CPU being involved in the actual DMA transaction.
+This is achieved by using the QDMA controller of the DPAA SoC.
+
+The QDMA controller transfers blocks of data between one source and one
+destination. The blocks of data transferred can be represented in memory
+as contiguous or noncontiguous using scatter/gather table(s).
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+Supported DPAA SoCs
+--------------------
+
+- LS1046A
+- LS1043A
+
+Prerequisites
+-------------
+
+See :doc:`../platform/dpaa` for setup information.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+.. note::
+
+   Some parts of the dpaa bus code (the qbman and fman library routines) are
+   dual licensed (BSD & GPLv2); however, they are used as BSD in DPDK in userspace.
+
+Compilation
+------------
+
+For builds using ``meson`` and ``ninja``, the driver will be built when the
+target platform is dpaa-based. No additional compilation steps are necessary.
+
+Initialization
+--------------
+
+On EAL initialization, DPAA DMA devices will be detected on the DPAA bus,
+probed, and populated into the device list.
+
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+The DPAA DMA driver for DPDK works only on the NXP SoCs listed under
+``Supported DPAA SoCs``.
diff --git a/doc/guides/dmadevs/index.rst b/doc/guides/dmadevs/index.rst
index c2aa6058e6..6b6406f590 100644
--- a/doc/guides/dmadevs/index.rst
+++ b/doc/guides/dmadevs/index.rst
@@ -12,6 +12,7 @@ an application through DMA API.
    :numbered:
 
    cnxk
+   dpaa
    hisilicon
    idxd
    ioat
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 01923e2deb..ba6ad7bf16 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -20,6 +20,9 @@ DPDK Release 21.11
       ninja -C build doc
       xdg-open build/doc/guides/html/rel_notes/release_21_11.html
 
+* **Added NXP DPAA DMA driver.**
+
+  * Added a new dmadev driver for NXP DPAA platform.
 
 New Features
 ------------
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 9a53fdc1fb..737ac8d8c5 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -250,6 +250,28 @@ dpaa_create_device_list(void)
 
 	rte_dpaa_bus.device_count += i;
 
+	/* Creating QDMA Device */
+	for (i = 0; i < RTE_DPAA_QDMA_DEVICES; i++) {
+		dev = calloc(1, sizeof(struct rte_dpaa_device));
+		if (!dev) {
+			DPAA_BUS_LOG(ERR, "Failed to allocate QDMA device");
+			ret = -1;
+			goto cleanup;
+		}
+
+		dev->device_type = FSL_DPAA_QDMA;
+		dev->id.dev_id = rte_dpaa_bus.device_count + i;
+
+		memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
+		sprintf(dev->name, "dpaa_qdma-%d", i+1);
+		DPAA_BUS_LOG(INFO, "%s qdma device added", dev->name);
+		dev->device.name = dev->name;
+		dev->device.devargs = dpaa_devargs_lookup(dev);
+
+		dpaa_add_to_device_list(dev);
+	}
+	rte_dpaa_bus.device_count += i;
+
 	return 0;
 
 cleanup:
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 97d189f9b0..31a5ea3fca 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -58,6 +58,9 @@ dpaa_seqn(struct rte_mbuf *mbuf)
 /** Device driver supports link state interrupt */
 #define RTE_DPAA_DRV_INTR_LSC  0x0008
 
+/** Number of supported QDMA devices */
+#define RTE_DPAA_QDMA_DEVICES  1
+
 #define RTE_DEV_TO_DPAA_CONST(ptr) \
 	container_of(ptr, const struct rte_dpaa_device, device)
 
@@ -73,6 +76,7 @@ TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
 enum rte_dpaa_type {
 	FSL_DPAA_ETH = 1,
 	FSL_DPAA_CRYPTO,
+	FSL_DPAA_QDMA
 };
 
 struct rte_dpaa_bus {
@@ -95,6 +99,7 @@ struct rte_dpaa_device {
 	union {
 		struct rte_eth_dev *eth_dev;
 		struct rte_cryptodev *crypto_dev;
+		struct rte_dma_dev *dmadev;
 	};
 	struct rte_dpaa_driver *driver;
 	struct dpaa_device_id id;
diff --git a/drivers/common/dpaax/dpaa_list.h b/drivers/common/dpaax/dpaa_list.h
index e94575982b..319a3562ab 100644
--- a/drivers/common/dpaax/dpaa_list.h
+++ b/drivers/common/dpaax/dpaa_list.h
@@ -35,6 +35,8 @@ do { \
 	const struct list_head *__p298 = (p); \
 	((__p298->next == __p298) && (__p298->prev == __p298)); \
 })
+#define list_first_entry(ptr, type, member) \
+	list_entry((ptr)->next, type, member)
 #define list_add(p, l) \
 do { \
 	struct list_head *__p298 = (p); \
diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
new file mode 100644
index 0000000000..29a6ec2fb3
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <rte_dpaa_bus.h>
+
+static int
+dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
+		__rte_unused struct rte_dpaa_device *dpaa_dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_remove(__rte_unused struct rte_dpaa_device *dpaa_dev)
+{
+	return 0;
+}
+
+static struct rte_dpaa_driver rte_dpaa_qdma_pmd;
+
+static struct rte_dpaa_driver rte_dpaa_qdma_pmd = {
+	.drv_type = FSL_DPAA_QDMA,
+	.probe = dpaa_qdma_probe,
+	.remove = dpaa_qdma_remove,
+};
+
+RTE_PMD_REGISTER_DPAA(dpaa_qdma, rte_dpaa_qdma_pmd);
+RTE_LOG_REGISTER_DEFAULT(dpaa_qdma_logtype, INFO);
diff --git a/drivers/dma/dpaa/dpaa_qdma_logs.h b/drivers/dma/dpaa/dpaa_qdma_logs.h
new file mode 100644
index 0000000000..01d4a508fc
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma_logs.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#ifndef __DPAA_QDMA_LOGS_H__
+#define __DPAA_QDMA_LOGS_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+extern int dpaa_qdma_logtype;
+
+#define DPAA_QDMA_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_qdma_logtype, "dpaa_qdma: " \
+		fmt "\n", ## args)
+
+#define DPAA_QDMA_DEBUG(fmt, args...) \
+	rte_log(RTE_LOG_DEBUG, dpaa_qdma_logtype, "dpaa_qdma: %s(): " \
+		fmt "\n", __func__, ## args)
+
+#define DPAA_QDMA_FUNC_TRACE() DPAA_QDMA_DEBUG(">>")
+
+#define DPAA_QDMA_INFO(fmt, args...) \
+	DPAA_QDMA_LOG(INFO, fmt, ## args)
+#define DPAA_QDMA_ERR(fmt, args...) \
+	DPAA_QDMA_LOG(ERR, fmt, ## args)
+#define DPAA_QDMA_WARN(fmt, args...) \
+	DPAA_QDMA_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define DPAA_QDMA_DP_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "dpaa_qdma: " fmt "\n", ## args)
+
+#define DPAA_QDMA_DP_DEBUG(fmt, args...) \
+	DPAA_QDMA_DP_LOG(DEBUG, fmt, ## args)
+#define DPAA_QDMA_DP_INFO(fmt, args...) \
+	DPAA_QDMA_DP_LOG(INFO, fmt, ## args)
+#define DPAA_QDMA_DP_WARN(fmt, args...) \
+	DPAA_QDMA_DP_LOG(WARNING, fmt, ## args)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __DPAA_QDMA_LOGS_H__ */
diff --git a/drivers/dma/dpaa/meson.build b/drivers/dma/dpaa/meson.build
new file mode 100644
index 0000000000..9ab0862ede
--- /dev/null
+++ b/drivers/dma/dpaa/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+	build = false
+	reason = 'only supported on linux'
+endif
+
+deps += ['dmadev', 'bus_dpaa']
+sources = files('dpaa_qdma.c')
+
+if cc.has_argument('-Wno-pointer-arith')
+	cflags += '-Wno-pointer-arith'
+endif
diff --git a/drivers/dma/dpaa/version.map b/drivers/dma/dpaa/version.map
new file mode 100644
index 0000000000..7bab7bea48
--- /dev/null
+++ b/drivers/dma/dpaa/version.map
@@ -0,0 +1,4 @@
+DPDK_22 {
+
+	local: *;
+};
diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
index ebac25d35f..7cdd6cd28f 100644
--- a/drivers/dma/meson.build
+++ b/drivers/dma/meson.build
@@ -3,6 +3,7 @@
 
 drivers = [
         'cnxk',
+	'dpaa',
         'hisilicon',
         'idxd',
         'ioat',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v4 2/5] dma/dpaa: add device probe and remove functionality
  2021-11-09  4:39         ` [dpdk-dev] [PATCH v4 0/5] Introduce " Gagandeep Singh
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 1/5] dma/dpaa: introduce " Gagandeep Singh
@ 2021-11-09  4:39           ` Gagandeep Singh
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 3/5] dma/dpaa: support basic operations Gagandeep Singh
                             ` (3 subsequent siblings)
  5 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-09  4:39 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch adds device probe, remove and initialization functionality.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 469 ++++++++++++++++++++++++++++++++++-
 drivers/dma/dpaa/dpaa_qdma.h | 236 ++++++++++++++++++
 2 files changed, 703 insertions(+), 2 deletions(-)
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.h

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 29a6ec2fb3..c3255dc0c7 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -3,17 +3,482 @@
  */
 
 #include <rte_dpaa_bus.h>
+#include <rte_dmadev_pmd.h>
+
+#include "dpaa_qdma.h"
+#include "dpaa_qdma_logs.h"
+
+static inline int
+ilog2(int x)
+{
+	int log = 0;
+
+	x >>= 1;
+
+	while (x) {
+		log++;
+		x >>= 1;
+	}
+	return log;
+}
+
+static u32
+qdma_readl(void *addr)
+{
+	return QDMA_IN(addr);
+}
+
+static void
+qdma_writel(u32 val, void *addr)
+{
+	QDMA_OUT(addr, val);
+}
+
+static void
+*dma_pool_alloc(int size, int aligned, dma_addr_t *phy_addr)
+{
+	void *virt_addr;
+
+	virt_addr = rte_malloc("dma pool alloc", size, aligned);
+	if (!virt_addr)
+		return NULL;
+
+	*phy_addr = rte_mem_virt2iova(virt_addr);
+
+	return virt_addr;
+}
+
+static void
+dma_pool_free(void *addr)
+{
+	rte_free(addr);
+}
+
+static void
+fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+	struct fsl_qdma_comp *comp_temp, *_comp_temp;
+	int id;
+
+	if (--fsl_queue->count)
+		goto finally;
+
+	id = (fsl_qdma->block_base - fsl_queue->block_base) /
+	      fsl_qdma->block_offset;
+
+	while (rte_atomic32_read(&wait_task[id]) == 1)
+		rte_delay_us(QDMA_DELAY);
+
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &fsl_queue->comp_used,	list) {
+		list_del(&comp_temp->list);
+		dma_pool_free(comp_temp->virt_addr);
+		dma_pool_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &fsl_queue->comp_free, list) {
+		list_del(&comp_temp->list);
+		dma_pool_free(comp_temp->virt_addr);
+		dma_pool_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+finally:
+	fsl_qdma->desc_allocated--;
+}
+
+static struct fsl_qdma_queue
+*fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *queue_head, *queue_temp;
+	int len, i, j;
+	int queue_num;
+	int blocks;
+	unsigned int queue_size[FSL_QDMA_QUEUE_MAX];
+
+	queue_num = fsl_qdma->n_queues;
+	blocks = fsl_qdma->num_blocks;
+
+	len = sizeof(*queue_head) * queue_num * blocks;
+	queue_head = rte_zmalloc("qdma: queue head", len, 0);
+	if (!queue_head)
+		return NULL;
+
+	for (i = 0; i < FSL_QDMA_QUEUE_MAX; i++)
+		queue_size[i] = QDMA_QUEUE_SIZE;
+
+	for (j = 0; j < blocks; j++) {
+		for (i = 0; i < queue_num; i++) {
+			if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+			    queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+				DPAA_QDMA_ERR("Get wrong queue-sizes.\n");
+				goto fail;
+			}
+			queue_temp = queue_head + i + (j * queue_num);
+
+			queue_temp->cq =
+			dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+				       queue_size[i],
+				       sizeof(struct fsl_qdma_format) *
+				       queue_size[i], &queue_temp->bus_addr);
+
+			if (!queue_temp->cq)
+				goto fail;
+
+			memset(queue_temp->cq, 0x0, queue_size[i] *
+			       sizeof(struct fsl_qdma_format));
+
+			queue_temp->block_base = fsl_qdma->block_base +
+				FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+			queue_temp->n_cq = queue_size[i];
+			queue_temp->id = i;
+			queue_temp->count = 0;
+			queue_temp->pending = 0;
+			queue_temp->virt_head = queue_temp->cq;
+
+		}
+	}
+	return queue_head;
+
+fail:
+	for (j = 0; j < blocks; j++) {
+		for (i = 0; i < queue_num; i++) {
+			queue_temp = queue_head + i + (j * queue_num);
+			dma_pool_free(queue_temp->cq);
+		}
+	}
+	rte_free(queue_head);
+
+	return NULL;
+}
+
+static struct
+fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
+{
+	struct fsl_qdma_queue *status_head;
+	unsigned int status_size;
+
+	status_size = QDMA_STATUS_SIZE;
+	if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+	    status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+		DPAA_QDMA_ERR("Get wrong status_size.\n");
+		return NULL;
+	}
+
+	status_head = rte_zmalloc("qdma: status head", sizeof(*status_head), 0);
+	if (!status_head)
+		return NULL;
+
+	/*
+	 * Buffer for queue command
+	 */
+	status_head->cq = dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+					 status_size,
+					 sizeof(struct fsl_qdma_format) *
+					 status_size,
+					 &status_head->bus_addr);
+
+	if (!status_head->cq) {
+		rte_free(status_head);
+		return NULL;
+	}
+
+	memset(status_head->cq, 0x0, status_size *
+	       sizeof(struct fsl_qdma_format));
+	status_head->n_cq = status_size;
+	status_head->virt_head = status_head->cq;
+
+	return status_head;
+}
+
+static int
+fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
+{
+	void *ctrl = fsl_qdma->ctrl_base;
+	void *block;
+	int i, count = RETRIES;
+	unsigned int j;
+	u32 reg;
+
+	/* Disable the command queue and wait for idle state. */
+	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+	reg |= FSL_QDMA_DMR_DQD;
+	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+		for (i = 0; i < FSL_QDMA_QUEUE_NUM_MAX; i++)
+			qdma_writel(0, block + FSL_QDMA_BCQMR(i));
+	}
+	while (true) {
+		reg = qdma_readl(ctrl + FSL_QDMA_DSR);
+		if (!(reg & FSL_QDMA_DSR_DB))
+			break;
+		if (count-- < 0)
+			return -EBUSY;
+		rte_delay_us(100);
+	}
+
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+
+		/* Disable status queue. */
+		qdma_writel(0, block + FSL_QDMA_BSQMR);
+
+		/*
+		 * clear the command queue interrupt detect register for
+		 * all queues.
+		 */
+		qdma_writel(0xffffffff, block + FSL_QDMA_BCQIDR(0));
+	}
+
+	return 0;
+}
+
+static int
+fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+	struct fsl_qdma_queue *temp;
+	void *ctrl = fsl_qdma->ctrl_base;
+	void *block;
+	u32 i, j;
+	u32 reg;
+	int ret, val;
+
+	/* Try to halt the qDMA engine first. */
+	ret = fsl_qdma_halt(fsl_qdma);
+	if (ret) {
+		DPAA_QDMA_ERR("DMA halt failed!");
+		return ret;
+	}
+
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+		for (i = 0; i < fsl_qdma->n_queues; i++) {
+			temp = fsl_queue + i + (j * fsl_qdma->n_queues);
+			/*
+			 * Initialize Command Queue registers to
+			 * point to the first
+			 * command descriptor in memory.
+			 * Dequeue Pointer Address Registers
+			 * Enqueue Pointer Address Registers
+			 */
+
+			qdma_writel(lower_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQDPA_SADDR(i));
+			qdma_writel(upper_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEDPA_SADDR(i));
+			qdma_writel(lower_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEPA_SADDR(i));
+			qdma_writel(upper_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEEPA_SADDR(i));
+
+			/* Initialize the queue mode. */
+			reg = FSL_QDMA_BCQMR_EN;
+			reg |= FSL_QDMA_BCQMR_CD_THLD(ilog2(temp->n_cq) - 4);
+			reg |= FSL_QDMA_BCQMR_CQ_SIZE(ilog2(temp->n_cq) - 6);
+			qdma_writel(reg, block + FSL_QDMA_BCQMR(i));
+		}
+
+		/*
+		 * Workaround for erratum: ERR010812.
+		 * We must enable XOFF to avoid enqueue rejections.
+		 * Setting SQCCMR ENTER_WM to 0x20.
+		 */
+
+		qdma_writel(FSL_QDMA_SQCCMR_ENTER_WM,
+			    block + FSL_QDMA_SQCCMR);
+
+		/*
+		 * Initialize status queue registers to point to the first
+		 * command descriptor in memory.
+		 * Dequeue Pointer Address Registers
+		 * Enqueue Pointer Address Registers
+		 */
+
+		qdma_writel(
+			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEEPAR);
+		qdma_writel(
+			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEPAR);
+		qdma_writel(
+			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEDPAR);
+		qdma_writel(
+			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQDPAR);
+		/* Disable status queue interrupt. */
+
+		qdma_writel(0x0, block + FSL_QDMA_BCQIER(0));
+		qdma_writel(0x0, block + FSL_QDMA_BSQICR);
+		qdma_writel(0x0, block + FSL_QDMA_CQIER);
+
+		/* Initialize the status queue mode. */
+		reg = FSL_QDMA_BSQMR_EN;
+		val = ilog2(fsl_qdma->status[j]->n_cq) - 6;
+		reg |= FSL_QDMA_BSQMR_CQ_SIZE(val);
+		qdma_writel(reg, block + FSL_QDMA_BSQMR);
+	}
+
+	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+	reg &= ~FSL_QDMA_DMR_DQD;
+	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+
+	return 0;
+}
+
+static void
+dma_release(void *fsl_chan)
+{
+	((struct fsl_qdma_chan *)fsl_chan)->free = true;
+	fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
+}
+
+static int
+dpaa_qdma_init(struct rte_dma_dev *dmadev)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+	struct fsl_qdma_chan *fsl_chan;
+	uint64_t phys_addr;
+	unsigned int len;
+	int ccsr_qdma_fd;
+	int regs_size;
+	int ret;
+	u32 i;
+
+	fsl_qdma->desc_allocated = 0;
+	fsl_qdma->n_chans = VIRT_CHANNELS;
+	fsl_qdma->n_queues = QDMA_QUEUES;
+	fsl_qdma->num_blocks = QDMA_BLOCKS;
+	fsl_qdma->block_offset = QDMA_BLOCK_OFFSET;
+
+	len = sizeof(*fsl_chan) * fsl_qdma->n_chans;
+	fsl_qdma->chans = rte_zmalloc("qdma: fsl chans", len, 0);
+	if (!fsl_qdma->chans)
+		return -1;
+
+	len = sizeof(struct fsl_qdma_queue *) * fsl_qdma->num_blocks;
+	fsl_qdma->status = rte_zmalloc("qdma: fsl status", len, 0);
+	if (!fsl_qdma->status) {
+		rte_free(fsl_qdma->chans);
+		return -1;
+	}
+
+	for (i = 0; i < fsl_qdma->num_blocks; i++) {
+		rte_atomic32_init(&wait_task[i]);
+		fsl_qdma->status[i] = fsl_qdma_prep_status_queue();
+		if (!fsl_qdma->status[i])
+			goto err;
+	}
+
+	ccsr_qdma_fd = open("/dev/mem", O_RDWR);
+	if (unlikely(ccsr_qdma_fd < 0)) {
+		DPAA_QDMA_ERR("Can not open /dev/mem for qdma CCSR map");
+		goto err;
+	}
+
+	regs_size = fsl_qdma->block_offset * (fsl_qdma->num_blocks + 2);
+	phys_addr = QDMA_CCSR_BASE;
+	fsl_qdma->ctrl_base = mmap(NULL, regs_size, PROT_READ |
+					 PROT_WRITE, MAP_SHARED,
+					 ccsr_qdma_fd, phys_addr);
+
+	close(ccsr_qdma_fd);
+	if (fsl_qdma->ctrl_base == MAP_FAILED) {
+		DPAA_QDMA_ERR("Can not map CCSR base qdma: Phys: %08" PRIx64
+		       " size %d\n", phys_addr, regs_size);
+		goto err;
+	}
+
+	fsl_qdma->status_base = fsl_qdma->ctrl_base + QDMA_BLOCK_OFFSET;
+	fsl_qdma->block_base = fsl_qdma->status_base + QDMA_BLOCK_OFFSET;
+
+	fsl_qdma->queue = fsl_qdma_alloc_queue_resources(fsl_qdma);
+	if (!fsl_qdma->queue) {
+		munmap(fsl_qdma->ctrl_base, regs_size);
+		goto err;
+	}
+
+	for (i = 0; i < fsl_qdma->n_chans; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		fsl_chan->qdma = fsl_qdma;
+		fsl_chan->queue = fsl_qdma->queue + i % (fsl_qdma->n_queues *
+							fsl_qdma->num_blocks);
+		fsl_chan->free = true;
+	}
+
+	ret = fsl_qdma_reg_init(fsl_qdma);
+	if (ret) {
+		DPAA_QDMA_ERR("Can't Initialize the qDMA engine.\n");
+		munmap(fsl_qdma->ctrl_base, regs_size);
+		goto err;
+	}
+
+	return 0;
+
+err:
+	rte_free(fsl_qdma->chans);
+	rte_free(fsl_qdma->status);
+
+	return -1;
+}
 
 static int
 dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
-		__rte_unused struct rte_dpaa_device *dpaa_dev)
+		struct rte_dpaa_device *dpaa_dev)
 {
+	struct rte_dma_dev *dmadev;
+	int ret;
+
+	dmadev = rte_dma_pmd_allocate(dpaa_dev->device.name,
+				      rte_socket_id(),
+				      sizeof(struct fsl_qdma_engine));
+	if (!dmadev) {
+		DPAA_QDMA_ERR("Unable to allocate dmadevice");
+		return -EINVAL;
+	}
+
+	dpaa_dev->dmadev = dmadev;
+
+	/* Invoke PMD device initialization function */
+	ret = dpaa_qdma_init(dmadev);
+	if (ret) {
+		(void)rte_dma_pmd_release(dpaa_dev->device.name);
+		return ret;
+	}
+
+	dmadev->state = RTE_DMA_DEV_READY;
 	return 0;
 }
 
 static int
-dpaa_qdma_remove(__rte_unused struct rte_dpaa_device *dpaa_dev)
+dpaa_qdma_remove(struct rte_dpaa_device *dpaa_dev)
 {
+	struct rte_dma_dev *dmadev = dpaa_dev->dmadev;
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+	int i = 0, max = QDMA_QUEUES * QDMA_BLOCKS;
+
+	for (i = 0; i < max; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		if (fsl_chan->free == false)
+			dma_release(fsl_chan);
+	}
+
+	rte_free(fsl_qdma->status);
+	rte_free(fsl_qdma->chans);
+
+	(void)rte_dma_pmd_release(dpaa_dev->device.name);
+
 	return 0;
 }
 
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
new file mode 100644
index 0000000000..c05620b740
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -0,0 +1,236 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#ifndef _DPAA_QDMA_H_
+#define _DPAA_QDMA_H_
+
+#include <rte_io.h>
+
+#define CORE_NUMBER 4
+#define RETRIES	5
+
+#define FSL_QDMA_DMR			0x0
+#define FSL_QDMA_DSR			0x4
+#define FSL_QDMA_DEIER			0xe00
+#define FSL_QDMA_DEDR			0xe04
+#define FSL_QDMA_DECFDW0R		0xe10
+#define FSL_QDMA_DECFDW1R		0xe14
+#define FSL_QDMA_DECFDW2R		0xe18
+#define FSL_QDMA_DECFDW3R		0xe1c
+#define FSL_QDMA_DECFQIDR		0xe30
+#define FSL_QDMA_DECBR			0xe34
+
+#define FSL_QDMA_BCQMR(x)		(0xc0 + 0x100 * (x))
+#define FSL_QDMA_BCQSR(x)		(0xc4 + 0x100 * (x))
+#define FSL_QDMA_BCQEDPA_SADDR(x)	(0xc8 + 0x100 * (x))
+#define FSL_QDMA_BCQDPA_SADDR(x)	(0xcc + 0x100 * (x))
+#define FSL_QDMA_BCQEEPA_SADDR(x)	(0xd0 + 0x100 * (x))
+#define FSL_QDMA_BCQEPA_SADDR(x)	(0xd4 + 0x100 * (x))
+#define FSL_QDMA_BCQIER(x)		(0xe0 + 0x100 * (x))
+#define FSL_QDMA_BCQIDR(x)		(0xe4 + 0x100 * (x))
+
+#define FSL_QDMA_SQEDPAR		0x808
+#define FSL_QDMA_SQDPAR			0x80c
+#define FSL_QDMA_SQEEPAR		0x810
+#define FSL_QDMA_SQEPAR			0x814
+#define FSL_QDMA_BSQMR			0x800
+#define FSL_QDMA_BSQSR			0x804
+#define FSL_QDMA_BSQICR			0x828
+#define FSL_QDMA_CQMR			0xa00
+#define FSL_QDMA_CQDSCR1		0xa08
+#define FSL_QDMA_CQDSCR2                0xa0c
+#define FSL_QDMA_CQIER			0xa10
+#define FSL_QDMA_CQEDR			0xa14
+#define FSL_QDMA_SQCCMR			0xa20
+
+#define FSL_QDMA_SQICR_ICEN
+
+#define FSL_QDMA_CQIDR_CQT		0xff000000
+#define FSL_QDMA_CQIDR_SQPE		0x800000
+#define FSL_QDMA_CQIDR_SQT		0x8000
+
+#define FSL_QDMA_BCQIER_CQTIE		0x8000
+#define FSL_QDMA_BCQIER_CQPEIE		0x800000
+#define FSL_QDMA_BSQICR_ICEN		0x80000000
+#define FSL_QDMA_BSQICR_ICST(x)		((x) << 16)
+#define FSL_QDMA_CQIER_MEIE		0x80000000
+#define FSL_QDMA_CQIER_TEIE		0x1
+#define FSL_QDMA_SQCCMR_ENTER_WM	0x200000
+
+#define FSL_QDMA_QUEUE_MAX		8
+
+#define FSL_QDMA_BCQMR_EN		0x80000000
+#define FSL_QDMA_BCQMR_EI		0x40000000
+#define FSL_QDMA_BCQMR_EI_BE           0x40
+#define FSL_QDMA_BCQMR_CD_THLD(x)	((x) << 20)
+#define FSL_QDMA_BCQMR_CQ_SIZE(x)	((x) << 16)
+
+#define FSL_QDMA_BCQSR_QF		0x10000
+#define FSL_QDMA_BCQSR_XOFF		0x1
+#define FSL_QDMA_BCQSR_QF_XOFF_BE      0x1000100
+
+#define FSL_QDMA_BSQMR_EN		0x80000000
+#define FSL_QDMA_BSQMR_DI		0x40000000
+#define FSL_QDMA_BSQMR_DI_BE		0x40
+#define FSL_QDMA_BSQMR_CQ_SIZE(x)	((x) << 16)
+
+#define FSL_QDMA_BSQSR_QE		0x20000
+#define FSL_QDMA_BSQSR_QE_BE		0x200
+#define FSL_QDMA_BSQSR_QF		0x10000
+
+#define FSL_QDMA_DMR_DQD		0x40000000
+#define FSL_QDMA_DSR_DB			0x80000000
+
+#define FSL_QDMA_COMMAND_BUFFER_SIZE	64
+#define FSL_QDMA_DESCRIPTOR_BUFFER_SIZE 32
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MIN	64
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MAX	16384
+#define FSL_QDMA_QUEUE_NUM_MAX		8
+
+#define FSL_QDMA_CMD_RWTTYPE		0x4
+#define FSL_QDMA_CMD_LWC                0x2
+
+#define FSL_QDMA_CMD_RWTTYPE_OFFSET	28
+#define FSL_QDMA_CMD_NS_OFFSET		27
+#define FSL_QDMA_CMD_DQOS_OFFSET	24
+#define FSL_QDMA_CMD_WTHROTL_OFFSET	20
+#define FSL_QDMA_CMD_DSEN_OFFSET	19
+#define FSL_QDMA_CMD_LWC_OFFSET		16
+
+#define QDMA_CCDF_STATUS		20
+#define QDMA_CCDF_OFFSET		20
+#define QDMA_CCDF_MASK			GENMASK(28, 20)
+#define QDMA_CCDF_FOTMAT		BIT(29)
+#define QDMA_CCDF_SER			BIT(30)
+
+#define QDMA_SG_FIN			BIT(30)
+#define QDMA_SG_EXT			BIT(31)
+#define QDMA_SG_LEN_MASK		GENMASK(29, 0)
+
+#define QDMA_BIG_ENDIAN			1
+#define COMP_TIMEOUT			100000
+#define COMMAND_QUEUE_OVERFLLOW		10
+
+/* qdma engine attribute */
+#define QDMA_QUEUE_SIZE 64
+#define QDMA_STATUS_SIZE 64
+#define QDMA_CCSR_BASE 0x8380000
+#define VIRT_CHANNELS 32
+#define QDMA_BLOCK_OFFSET 0x10000
+#define QDMA_BLOCKS 4
+#define QDMA_QUEUES 8
+#define QDMA_DELAY 1000
+
+#ifdef QDMA_BIG_ENDIAN
+#define QDMA_IN(addr)		be32_to_cpu(rte_read32(addr))
+#define QDMA_OUT(addr, val)	rte_write32(be32_to_cpu(val), addr)
+#define QDMA_IN_BE(addr)	rte_read32(addr)
+#define QDMA_OUT_BE(addr, val)	rte_write32(val, addr)
+#else
+#define QDMA_IN(addr)		rte_read32(addr)
+#define QDMA_OUT(addr, val)	rte_write32(val, addr)
+#define QDMA_IN_BE(addr)	be32_to_cpu(rte_read32(addr))
+#define QDMA_OUT_BE(addr, val)	rte_write32(be32_to_cpu(val), addr)
+#endif
+
+#define FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma_engine, x)			\
+	(((fsl_qdma_engine)->block_offset) * (x))
+
+typedef void (*dma_call_back)(void *params);
+
+/* qDMA Command Descriptor Formats */
+struct fsl_qdma_format {
+	__le32 status; /* ser, status */
+	__le32 cfg;	/* format, offset */
+	union {
+		struct {
+			__le32 addr_lo;	/* low 32-bits of 40-bit address */
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u8 __reserved1[2];
+			u8 cfg8b_w1; /* dd, queue */
+		};
+		__le64 data;
+	};
+};
+
+/* qDMA Source Descriptor Format */
+struct fsl_qdma_sdf {
+	__le32 rev3;
+	__le32 cfg; /* rev4, bit[0-11] - ssd, bit[12-23] sss */
+	__le32 rev5;
+	__le32 cmd;
+};
+
+/* qDMA Destination Descriptor Format */
+struct fsl_qdma_ddf {
+	__le32 rev1;
+	__le32 cfg; /* rev2, bit[0-11] - dsd, bit[12-23] - dss */
+	__le32 rev3;
+	__le32 cmd;
+};
+
+enum dma_status {
+	DMA_COMPLETE,
+	DMA_IN_PROGRESS,
+	DMA_IN_PREPAR,
+	DMA_PAUSED,
+	DMA_ERROR,
+};
+
+struct fsl_qdma_chan {
+	struct fsl_qdma_engine	*qdma;
+	struct fsl_qdma_queue	*queue;
+	bool			free;
+	struct list_head	list;
+};
+
+struct fsl_qdma_list {
+	struct list_head	dma_list;
+};
+
+struct fsl_qdma_queue {
+	struct fsl_qdma_format	*virt_head;
+	struct list_head	comp_used;
+	struct list_head	comp_free;
+	dma_addr_t		bus_addr;
+	u32                     n_cq;
+	u32			id;
+	u32			count;
+	u32			pending;
+	struct fsl_qdma_format	*cq;
+	void			*block_base;
+};
+
+struct fsl_qdma_comp {
+	dma_addr_t              bus_addr;
+	dma_addr_t              desc_bus_addr;
+	void			*virt_addr;
+	int			index;
+	void			*desc_virt_addr;
+	struct fsl_qdma_chan	*qchan;
+	dma_call_back		call_back_func;
+	void			*params;
+	struct list_head	list;
+};
+
+struct fsl_qdma_engine {
+	int			desc_allocated;
+	void			*ctrl_base;
+	void			*status_base;
+	void			*block_base;
+	u32			n_chans;
+	u32			n_queues;
+	int			error_irq;
+	struct fsl_qdma_queue	*queue;
+	struct fsl_qdma_queue	**status;
+	struct fsl_qdma_chan	*chans;
+	u32			num_blocks;
+	u8			free_block_id;
+	u32			vchan_map[4];
+	int			block_offset;
+};
+
+static rte_atomic32_t wait_task[CORE_NUMBER];
+
+#endif /* _DPAA_QDMA_H_ */
-- 
2.25.1
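
For readers of the register-init hunk above: fsl_qdma_reg_init() encodes
queue depths as log2-of-size register fields. A small self-check of that
arithmetic, assuming the driver's 64-entry queues (a sketch, not part of
the patch):

    #include <assert.h>

    /* same bit-walk as the driver's ilog2() helper */
    static int
    ilog2_check(int x)
    {
        int log = 0;

        for (x >>= 1; x; x >>= 1)
            log++;
        return log;
    }

    int main(void)
    {
        /* QDMA_QUEUE_SIZE and QDMA_STATUS_SIZE are both 64 */
        assert(ilog2_check(64) == 6);
        /* FSL_QDMA_BCQMR_CQ_SIZE(ilog2(n_cq) - 6) -> field value 0 */
        assert(ilog2_check(64) - 6 == 0);
        /* FSL_QDMA_BCQMR_CD_THLD(ilog2(n_cq) - 4) -> field value 2 */
        assert(ilog2_check(64) - 4 == 2);
        return 0;
    }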


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v4 3/5] dma/dpaa: support basic operations
  2021-11-09  4:39         ` [dpdk-dev] [PATCH v4 0/5] Introduce " Gagandeep Singh
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 1/5] dma/dpaa: introduce " Gagandeep Singh
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 2/5] dma/dpaa: add device probe and remove functionality Gagandeep Singh
@ 2021-11-09  4:39           ` Gagandeep Singh
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 4/5] dma/dpaa: support DMA operations Gagandeep Singh
                             ` (2 subsequent siblings)
  5 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-09  4:39 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch supports basic DMA operations, which include
device capability query and channel setup.
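
For reference, below is a minimal sketch (not part of this patch) of how
an application exercises these two paths through the generic dmadev API;
the device id, the single vchan and the descriptor count are assumptions
for illustration only.

    #include <rte_dmadev.h>

    static int
    setup_dpaa_dma(int16_t dev_id)
    {
        struct rte_dma_info info;
        struct rte_dma_conf conf = { .nb_vchans = 1 };
        struct rte_dma_vchan_conf vconf = {
            .direction = RTE_DMA_DIR_MEM_TO_MEM,
            .nb_desc = 64, /* dpaa_info_get() reports min == max == 64 */
        };

        /* dev_info_get -> dpaa_info_get() in this patch */
        if (rte_dma_info_get(dev_id, &info) < 0)
            return -1;
        if (!(info.dev_capa & RTE_DMA_CAPA_OPS_COPY))
            return -1;

        if (rte_dma_configure(dev_id, &conf) < 0)
            return -1;
        /* vchan_setup -> dpaa_qdma_queue_setup()/dpaa_get_channel() */
        if (rte_dma_vchan_setup(dev_id, 0, &vconf) < 0)
            return -1;

        return rte_dma_start(dev_id);
    }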

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/dma/dpaa/dpaa_qdma.c | 204 +++++++++++++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h |   6 ++
 2 files changed, 210 insertions(+)

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index c3255dc0c7..e59cd36872 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -8,6 +8,19 @@
 #include "dpaa_qdma.h"
 #include "dpaa_qdma_logs.h"
 
+static inline void
+qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
+{
+	ccdf->addr_hi = upper_32_bits(addr);
+	ccdf->addr_lo = rte_cpu_to_le_32(lower_32_bits(addr));
+}
+
+static inline void
+qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
+{
+	csgf->cfg = rte_cpu_to_le_32(len & QDMA_SG_LEN_MASK);
+}
+
 static inline int
 ilog2(int x)
 {
@@ -91,6 +104,77 @@ fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
 	fsl_qdma->desc_allocated--;
 }
 
+/*
+ * Pre-request command descriptor and compound S/G for enqueue.
+ */
+static int
+fsl_qdma_pre_request_enqueue_comp_sd_desc(
+					struct fsl_qdma_queue *queue,
+					int size, int aligned)
+{
+	struct fsl_qdma_comp *comp_temp, *_comp_temp;
+	struct fsl_qdma_sdf *sdf;
+	struct fsl_qdma_ddf *ddf;
+	struct fsl_qdma_format *csgf_desc;
+	int i;
+
+	for (i = 0; i < (int)(queue->n_cq + COMMAND_QUEUE_OVERFLLOW); i++) {
+		comp_temp = rte_zmalloc("qdma: comp temp",
+					sizeof(*comp_temp), 0);
+		if (!comp_temp)
+			return -ENOMEM;
+
+		comp_temp->virt_addr =
+		dma_pool_alloc(size, aligned, &comp_temp->bus_addr);
+		if (!comp_temp->virt_addr) {
+			rte_free(comp_temp);
+			goto fail;
+		}
+
+		comp_temp->desc_virt_addr =
+		dma_pool_alloc(size, aligned, &comp_temp->desc_bus_addr);
+		if (!comp_temp->desc_virt_addr) {
+			rte_free(comp_temp->virt_addr);
+			rte_free(comp_temp);
+			goto fail;
+		}
+
+		memset(comp_temp->virt_addr, 0, FSL_QDMA_COMMAND_BUFFER_SIZE);
+		memset(comp_temp->desc_virt_addr, 0,
+		       FSL_QDMA_DESCRIPTOR_BUFFER_SIZE);
+
+		csgf_desc = (struct fsl_qdma_format *)comp_temp->virt_addr + 1;
+		sdf = (struct fsl_qdma_sdf *)comp_temp->desc_virt_addr;
+		ddf = (struct fsl_qdma_ddf *)comp_temp->desc_virt_addr + 1;
+		/* Compound Command Descriptor(Frame List Table) */
+		qdma_desc_addr_set64(csgf_desc, comp_temp->desc_bus_addr);
+		/* It must be 32 as Compound S/G Descriptor */
+		qdma_csgf_set_len(csgf_desc, 32);
+		/* Descriptor Buffer */
+		sdf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
+			       FSL_QDMA_CMD_RWTTYPE_OFFSET);
+		ddf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
+			       FSL_QDMA_CMD_RWTTYPE_OFFSET);
+		ddf->cmd |= rte_cpu_to_le_32(FSL_QDMA_CMD_LWC <<
+				FSL_QDMA_CMD_LWC_OFFSET);
+
+		list_add_tail(&comp_temp->list, &queue->comp_free);
+	}
+
+	return 0;
+
+fail:
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &queue->comp_free, list) {
+		list_del(&comp_temp->list);
+		rte_free(comp_temp->virt_addr);
+		rte_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+	return -ENOMEM;
+}
+
 static struct fsl_qdma_queue
 *fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
 {
@@ -335,6 +419,84 @@ fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
+static int
+fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+	int ret;
+
+	if (fsl_queue->count++)
+		goto finally;
+
+	INIT_LIST_HEAD(&fsl_queue->comp_free);
+	INIT_LIST_HEAD(&fsl_queue->comp_used);
+
+	ret = fsl_qdma_pre_request_enqueue_comp_sd_desc(fsl_queue,
+				FSL_QDMA_COMMAND_BUFFER_SIZE, 64);
+	if (ret) {
+		DPAA_QDMA_ERR(
+			"failed to alloc dma buffer for comp descriptor\n");
+		goto exit;
+	}
+
+finally:
+	return fsl_qdma->desc_allocated++;
+
+exit:
+	return -ENOMEM;
+}
+
+static int
+dpaa_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info,
+	      uint32_t info_sz)
+{
+#define DPAADMA_MAX_DESC        64
+#define DPAADMA_MIN_DESC        64
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(info_sz);
+
+	dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM |
+			     RTE_DMA_CAPA_MEM_TO_DEV |
+			     RTE_DMA_CAPA_DEV_TO_DEV |
+			     RTE_DMA_CAPA_DEV_TO_MEM |
+			     RTE_DMA_CAPA_SILENT |
+			     RTE_DMA_CAPA_OPS_COPY;
+	dev_info->max_vchans = 1;
+	dev_info->max_desc = DPAADMA_MAX_DESC;
+	dev_info->min_desc = DPAADMA_MIN_DESC;
+
+	return 0;
+}
+
+static int
+dpaa_get_channel(struct fsl_qdma_engine *fsl_qdma,  uint16_t vchan)
+{
+	u32 i, start, end;
+	int ret;
+
+	start = fsl_qdma->free_block_id * QDMA_QUEUES;
+	fsl_qdma->free_block_id++;
+
+	end = start + 1;
+	for (i = start; i < end; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		if (fsl_chan->free) {
+			fsl_chan->free = false;
+			ret = fsl_qdma_alloc_chan_resources(fsl_chan);
+			if (ret)
+				return ret;
+
+			fsl_qdma->vchan_map[vchan] = i;
+			return 0;
+		}
+	}
+
+	return -1;
+}
+
 static void
 dma_release(void *fsl_chan)
 {
@@ -342,6 +504,45 @@ dma_release(void *fsl_chan)
 	fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
 }
 
+static int
+dpaa_qdma_configure(__rte_unused struct rte_dma_dev *dmadev,
+		    __rte_unused const struct rte_dma_conf *dev_conf,
+		    __rte_unused uint32_t conf_sz)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_start(__rte_unused struct rte_dma_dev *dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_close(__rte_unused struct rte_dma_dev *dev)
+{
+	return 0;
+}
+
+static int
+dpaa_qdma_queue_setup(struct rte_dma_dev *dmadev,
+		      uint16_t vchan,
+		      __rte_unused const struct rte_dma_vchan_conf *conf,
+		      __rte_unused uint32_t conf_sz)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+
+	return dpaa_get_channel(fsl_qdma, vchan);
+}
+
+static struct rte_dma_dev_ops dpaa_qdma_ops = {
+	.dev_info_get		  = dpaa_info_get,
+	.dev_configure            = dpaa_qdma_configure,
+	.dev_start                = dpaa_qdma_start,
+	.dev_close                = dpaa_qdma_close,
+	.vchan_setup		  = dpaa_qdma_queue_setup,
+};
+
 static int
 dpaa_qdma_init(struct rte_dma_dev *dmadev)
 {
@@ -448,6 +649,9 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
 	}
 
 	dpaa_dev->dmadev = dmadev;
+	dmadev->dev_ops = &dpaa_qdma_ops;
+	dmadev->device = &dpaa_dev->device;
+	dmadev->fp_obj->dev_private = dmadev->data->dev_private;
 
 	/* Invoke PMD device initialization function */
 	ret = dpaa_qdma_init(dmadev);
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
index c05620b740..f046167108 100644
--- a/drivers/dma/dpaa/dpaa_qdma.h
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -10,6 +10,12 @@
 #define CORE_NUMBER 4
 #define RETRIES	5
 
+#ifndef GENMASK
+#define BITS_PER_LONG	(__SIZEOF_LONG__ * 8)
+#define GENMASK(h, l) \
+		(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+#endif
+
 #define FSL_QDMA_DMR			0x0
 #define FSL_QDMA_DSR			0x4
 #define FSL_QDMA_DEIER			0xe00
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v4 4/5] dma/dpaa: support DMA operations
  2021-11-09  4:39         ` [dpdk-dev] [PATCH v4 0/5] Introduce " Gagandeep Singh
                             ` (2 preceding siblings ...)
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 3/5] dma/dpaa: support basic operations Gagandeep Singh
@ 2021-11-09  4:39           ` Gagandeep Singh
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 5/5] dma/dpaa: support statistics Gagandeep Singh
  2021-11-10 12:48           ` [dpdk-dev] [PATCH v4 0/5] Introduce DPAA DMA driver Thomas Monjalon
  5 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-09  4:39 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch supports the copy, submit, completed and
completed-status functionality of the DMA driver.
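
For reference, a minimal sketch (not part of this patch) of driving the
new fast-path ops from an application; the device id, vchan 0 and the
IOVAs are assumptions for illustration only.

    #include <stdbool.h>
    #include <rte_dmadev.h>

    static int
    do_one_copy(int16_t dev_id, rte_iova_t src, rte_iova_t dst, uint32_t len)
    {
        uint16_t last_idx = 0;
        bool has_error = false;
        int idx;

        /* enqueue + doorbell in one call; maps to dpaa_qdma_enqueue(),
         * with RTE_DMA_OP_FLAG_SUBMIT ringing BCQMR_EI immediately
         */
        idx = rte_dma_copy(dev_id, 0, src, dst, len, RTE_DMA_OP_FLAG_SUBMIT);
        if (idx < 0)
            return idx;

        /* busy-poll for completion; maps to dpaa_qdma_dequeue() */
        while (rte_dma_completed(dev_id, 0, 1, &last_idx, &has_error) == 0)
            ;
        return has_error ? -1 : 0;
    }

Passing flags == 0 instead defers the doorbell until rte_dma_submit()
(dpaa_qdma_submit() here), which allows batching several copies.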

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 doc/guides/dmadevs/dpaa.rst  |  11 ++
 drivers/dma/dpaa/dpaa_qdma.c | 334 +++++++++++++++++++++++++++++++++++
 drivers/dma/dpaa/dpaa_qdma.h |   4 +
 3 files changed, 349 insertions(+)

diff --git a/doc/guides/dmadevs/dpaa.rst b/doc/guides/dmadevs/dpaa.rst
index 885a8bb8aa..4fbd8a25fb 100644
--- a/doc/guides/dmadevs/dpaa.rst
+++ b/doc/guides/dmadevs/dpaa.rst
@@ -46,6 +46,17 @@ Initialization
 On EAL initialization, DPAA DMA devices will be detected on DPAA bus and
 will be probed and populated into their device list.
 
+Features
+--------
+
+The DPAA DMA implements following features in the dmadev API:
+
+- Supports 1 virtual channel.
+- Supports all 4 DMA transfers: MEM_TO_MEM, MEM_TO_DEV,
+  DEV_TO_MEM, DEV_TO_DEV.
+- Supports DMA silent mode.
+- Supports issuing DMA of data within memory without hogging CPU while
+  performing DMA operation.
 
 Platform Requirement
 ~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index e59cd36872..ebe6211f08 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -15,12 +15,50 @@ qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
 	ccdf->addr_lo = rte_cpu_to_le_32(lower_32_bits(addr));
 }
 
+static inline u64
+qdma_ccdf_get_queue(const struct fsl_qdma_format *ccdf)
+{
+	return ccdf->cfg8b_w1 & 0xff;
+}
+
+static inline int
+qdma_ccdf_get_offset(const struct fsl_qdma_format *ccdf)
+{
+	return (rte_le_to_cpu_32(ccdf->cfg) & QDMA_CCDF_MASK)
+		>> QDMA_CCDF_OFFSET;
+}
+
+static inline void
+qdma_ccdf_set_format(struct fsl_qdma_format *ccdf, int offset)
+{
+	ccdf->cfg = rte_cpu_to_le_32(QDMA_CCDF_FOTMAT | offset);
+}
+
+static inline int
+qdma_ccdf_get_status(const struct fsl_qdma_format *ccdf)
+{
+	return (rte_le_to_cpu_32(ccdf->status) & QDMA_CCDF_MASK)
+		>> QDMA_CCDF_STATUS;
+}
+
+static inline void
+qdma_ccdf_set_ser(struct fsl_qdma_format *ccdf, int status)
+{
+	ccdf->status = rte_cpu_to_le_32(QDMA_CCDF_SER | status);
+}
+
 static inline void
 qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
 {
 	csgf->cfg = rte_cpu_to_le_32(len & QDMA_SG_LEN_MASK);
 }
 
+static inline void
+qdma_csgf_set_f(struct fsl_qdma_format *csgf, int len)
+{
+	csgf->cfg = rte_cpu_to_le_32(QDMA_SG_FIN | (len & QDMA_SG_LEN_MASK));
+}
+
 static inline int
 ilog2(int x)
 {
@@ -47,6 +85,18 @@ qdma_writel(u32 val, void *addr)
 	QDMA_OUT(addr, val);
 }
 
+static u32
+qdma_readl_be(void *addr)
+{
+	return QDMA_IN_BE(addr);
+}
+
+static void
+qdma_writel_be(u32 val, void *addr)
+{
+	QDMA_OUT_BE(addr, val);
+}
+
 static void
 *dma_pool_alloc(int size, int aligned, dma_addr_t *phy_addr)
 {
@@ -104,6 +154,32 @@ fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
 	fsl_qdma->desc_allocated--;
 }
 
+static void
+fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
+				      dma_addr_t dst, dma_addr_t src, u32 len)
+{
+	struct fsl_qdma_format *csgf_src, *csgf_dest;
+
+	/* Note: the command table (fsl_comp->virt_addr) is filled directly
+	 * into the queue's command descriptors while enqueuing the
+	 * descriptor; see fsl_qdma_enqueue_desc().
+	 * The frame list table (virt_addr + 1) and the source/destination
+	 * descriptor table (fsl_comp->desc_virt_addr and
+	 * fsl_comp->desc_virt_addr + 1) are filled in the control path, in
+	 * fsl_qdma_pre_request_enqueue_comp_sd_desc().
+	 */
+	csgf_src = (struct fsl_qdma_format *)fsl_comp->virt_addr + 2;
+	csgf_dest = (struct fsl_qdma_format *)fsl_comp->virt_addr + 3;
+
+	/* Status notification is enqueued to status queue. */
+	qdma_desc_addr_set64(csgf_src, src);
+	qdma_csgf_set_len(csgf_src, len);
+	qdma_desc_addr_set64(csgf_dest, dst);
+	qdma_csgf_set_len(csgf_dest, len);
+	/* This entry is the last entry. */
+	qdma_csgf_set_f(csgf_dest, len);
+}
+
 /*
  * Pre-request command descriptor and compound S/G for enqueue.
  */
@@ -175,6 +251,26 @@ fsl_qdma_pre_request_enqueue_comp_sd_desc(
 	return -ENOMEM;
 }
 
+/*
+ * Request a command descriptor for enqueue.
+ */
+static struct fsl_qdma_comp *
+fsl_qdma_request_enqueue_desc(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *queue = fsl_chan->queue;
+	struct fsl_qdma_comp *comp_temp;
+
+	if (!list_empty(&queue->comp_free)) {
+		comp_temp = list_first_entry(&queue->comp_free,
+					     struct fsl_qdma_comp,
+					     list);
+		list_del(&comp_temp->list);
+		return comp_temp;
+	}
+
+	return NULL;
+}
+
 static struct fsl_qdma_queue
 *fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
 {
@@ -324,6 +420,54 @@ fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
+static int
+fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma,
+				 void *block, int id, const uint16_t nb_cpls,
+				 uint16_t *last_idx,
+				 enum rte_dma_status_code *status)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+	struct fsl_qdma_queue *fsl_status = fsl_qdma->status[id];
+	struct fsl_qdma_queue *temp_queue;
+	struct fsl_qdma_format *status_addr;
+	struct fsl_qdma_comp *fsl_comp = NULL;
+	u32 reg, i;
+	int count = 0;
+
+	while (count < nb_cpls) {
+		reg = qdma_readl_be(block + FSL_QDMA_BSQSR);
+		if (reg & FSL_QDMA_BSQSR_QE_BE)
+			return count;
+
+		status_addr = fsl_status->virt_head;
+
+		i = qdma_ccdf_get_queue(status_addr) +
+			id * fsl_qdma->n_queues;
+		temp_queue = fsl_queue + i;
+		fsl_comp = list_first_entry(&temp_queue->comp_used,
+					    struct fsl_qdma_comp,
+					    list);
+		list_del(&fsl_comp->list);
+
+		reg = qdma_readl_be(block + FSL_QDMA_BSQMR);
+		reg |= FSL_QDMA_BSQMR_DI_BE;
+
+		qdma_desc_addr_set64(status_addr, 0x0);
+		fsl_status->virt_head++;
+		if (fsl_status->virt_head == fsl_status->cq + fsl_status->n_cq)
+			fsl_status->virt_head = fsl_status->cq;
+		qdma_writel_be(reg, block + FSL_QDMA_BSQMR);
+		*last_idx = fsl_comp->index;
+		if (status != NULL)
+			status[count] = RTE_DMA_STATUS_SUCCESSFUL;
+
+		list_add_tail(&fsl_comp->list, &temp_queue->comp_free);
+		count++;
+
+	}
+	return count;
+}
+
 static int
 fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 {
@@ -419,6 +563,66 @@ fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
 	return 0;
 }
 
+static void *
+fsl_qdma_prep_memcpy(void *fsl_chan, dma_addr_t dst,
+			   dma_addr_t src, size_t len,
+			   void *call_back,
+			   void *param)
+{
+	struct fsl_qdma_comp *fsl_comp;
+
+	fsl_comp =
+	fsl_qdma_request_enqueue_desc((struct fsl_qdma_chan *)fsl_chan);
+	if (!fsl_comp)
+		return NULL;
+
+	fsl_comp->qchan = fsl_chan;
+	fsl_comp->call_back_func = call_back;
+	fsl_comp->params = param;
+
+	fsl_qdma_comp_fill_memcpy(fsl_comp, dst, src, len);
+	return (void *)fsl_comp;
+}
+
+static int
+fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan,
+				  struct fsl_qdma_comp *fsl_comp,
+				  uint64_t flags)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	void *block = fsl_queue->block_base;
+	struct fsl_qdma_format *ccdf;
+	u32 reg;
+
+	/* retrieve and store the register value in big endian
+	 * to avoid bits swap
+	 */
+	reg = qdma_readl_be(block +
+			 FSL_QDMA_BCQSR(fsl_queue->id));
+	if (reg & (FSL_QDMA_BCQSR_QF_XOFF_BE))
+		return -1;
+
+	/* fill the descriptor command table */
+	ccdf = (struct fsl_qdma_format *)fsl_queue->virt_head;
+	qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16);
+	qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(fsl_comp->virt_addr));
+	qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(fsl_comp->virt_addr));
+	fsl_comp->index = fsl_queue->virt_head - fsl_queue->cq;
+	fsl_queue->virt_head++;
+
+	if (fsl_queue->virt_head == fsl_queue->cq + fsl_queue->n_cq)
+		fsl_queue->virt_head = fsl_queue->cq;
+
+	list_add_tail(&fsl_comp->list, &fsl_queue->comp_used);
+
+	if (flags == RTE_DMA_OP_FLAG_SUBMIT) {
+		reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
+		reg |= FSL_QDMA_BCQMR_EI_BE;
+		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+	}
+	return fsl_comp->index;
+}
+
 static int
 fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
 {
@@ -535,6 +739,132 @@ dpaa_qdma_queue_setup(struct rte_dma_dev *dmadev,
 	return dpaa_get_channel(fsl_qdma, vchan);
 }
 
+static int
+dpaa_qdma_submit(void *dev_private, uint16_t vchan)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	void *block = fsl_queue->block_base;
+	u32 reg;
+
+	while (fsl_queue->pending) {
+		reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
+		reg |= FSL_QDMA_BCQMR_EI_BE;
+		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+		fsl_queue->pending--;
+	}
+
+	return 0;
+}
+
+static int
+dpaa_qdma_enqueue(void *dev_private, uint16_t vchan,
+		  rte_iova_t src, rte_iova_t dst,
+		  uint32_t length, uint64_t flags)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	int ret;
+
+	void *fsl_comp = NULL;
+
+	fsl_comp = fsl_qdma_prep_memcpy(fsl_chan,
+			(dma_addr_t)dst, (dma_addr_t)src,
+			length, NULL, NULL);
+	if (!fsl_comp) {
+		DPAA_QDMA_DP_DEBUG("fsl_comp is NULL\n");
+		return -1;
+	}
+	ret = fsl_qdma_enqueue_desc(fsl_chan, fsl_comp, flags);
+
+	return ret;
+}
+
+static uint16_t
+dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,
+			 const uint16_t nb_cpls, uint16_t *last_idx,
+			 enum rte_dma_status_code *st)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	int id = (int)((fsl_qdma->vchan_map[vchan]) / QDMA_QUEUES);
+	void *block;
+	int intr;
+	void *status = fsl_qdma->status_base;
+
+	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
+	if (intr) {
+		DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECBR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+		qdma_writel(0xffffffff,
+			    status + FSL_QDMA_DEDR);
+		intr = qdma_readl(status + FSL_QDMA_DEDR);
+	}
+
+	block = fsl_qdma->block_base +
+		FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id);
+
+	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
+						last_idx, st);
+
+	return intr;
+}
+
+
+static uint16_t
+dpaa_qdma_dequeue(void *dev_private,
+		  uint16_t vchan, const uint16_t nb_cpls,
+		  uint16_t *last_idx, bool *has_error)
+{
+	struct fsl_qdma_engine *fsl_qdma = (struct fsl_qdma_engine *)dev_private;
+	int id = (int)((fsl_qdma->vchan_map[vchan]) / QDMA_QUEUES);
+	void *block;
+	int intr;
+	void *status = fsl_qdma->status_base;
+
+	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
+	if (intr) {
+		DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+		intr = qdma_readl(status + FSL_QDMA_DECBR);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+		qdma_writel(0xffffffff,
+			    status + FSL_QDMA_DEDR);
+		intr = qdma_readl(status + FSL_QDMA_DEDR);
+		*has_error = true;
+	}
+
+	block = fsl_qdma->block_base +
+		FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id);
+
+	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
+						last_idx, NULL);
+
+	return intr;
+}
+
 static struct rte_dma_dev_ops dpaa_qdma_ops = {
 	.dev_info_get		  = dpaa_info_get,
 	.dev_configure            = dpaa_qdma_configure,
@@ -652,6 +982,10 @@ dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
 	dmadev->dev_ops = &dpaa_qdma_ops;
 	dmadev->device = &dpaa_dev->device;
 	dmadev->fp_obj->dev_private = dmadev->data->dev_private;
+	dmadev->fp_obj->copy = dpaa_qdma_enqueue;
+	dmadev->fp_obj->submit = dpaa_qdma_submit;
+	dmadev->fp_obj->completed = dpaa_qdma_dequeue;
+	dmadev->fp_obj->completed_status = dpaa_qdma_dequeue_status;
 
 	/* Invoke PMD device initialization function */
 	ret = dpaa_qdma_init(dmadev);
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
index f046167108..6d0ac58317 100644
--- a/drivers/dma/dpaa/dpaa_qdma.h
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -7,6 +7,10 @@
 
 #include <rte_io.h>
 
+#ifndef BIT
+#define BIT(nr)		(1UL << (nr))
+#endif
+
 #define CORE_NUMBER 4
 #define RETRIES	5
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v4 5/5] dma/dpaa: support statistics
  2021-11-09  4:39         ` [dpdk-dev] [PATCH v4 0/5] Introduce " Gagandeep Singh
                             ` (3 preceding siblings ...)
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 4/5] dma/dpaa: support DMA operations Gagandeep Singh
@ 2021-11-09  4:39           ` Gagandeep Singh
  2021-11-10 12:48           ` [dpdk-dev] [PATCH v4 0/5] Introduce DPAA DMA driver Thomas Monjalon
  5 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-09  4:39 UTC (permalink / raw)
  To: dev; +Cc: nipun.gupta, thomas, Gagandeep Singh

This patch supports the DMA statistics read and reset
operations.
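
For reference, a minimal sketch (not part of this patch) of reading and
resetting the new per-vchan counters; the device id and vchan 0 are
assumptions for illustration only.

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_dmadev.h>

    static void
    dump_dma_stats(int16_t dev_id)
    {
        struct rte_dma_stats stats;

        /* maps to dpaa_qdma_stats_get() below */
        if (rte_dma_stats_get(dev_id, 0, &stats) == 0)
            printf("submitted=%" PRIu64 " completed=%" PRIu64
                   " errors=%" PRIu64 "\n",
                   stats.submitted, stats.completed, stats.errors);

        rte_dma_stats_reset(dev_id, 0); /* maps to dpaa_qdma_stats_reset() */
    }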

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 doc/guides/dmadevs/dpaa.rst  |  1 +
 drivers/dma/dpaa/dpaa_qdma.c | 51 +++++++++++++++++++++++++++++++++++-
 drivers/dma/dpaa/dpaa_qdma.h |  1 +
 3 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/doc/guides/dmadevs/dpaa.rst b/doc/guides/dmadevs/dpaa.rst
index 4fbd8a25fb..7d51c8c4cd 100644
--- a/doc/guides/dmadevs/dpaa.rst
+++ b/doc/guides/dmadevs/dpaa.rst
@@ -57,6 +57,7 @@ The DPAA DMA implements following features in the dmadev API:
 - Supports DMA silent mode.
 - Supports issuing DMA of data within memory without hogging CPU while
   performing DMA operation.
+- support statistics
 
 Platform Requirement
 ~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index ebe6211f08..cb272c700f 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -319,7 +319,7 @@ static struct fsl_qdma_queue
 			queue_temp->count = 0;
 			queue_temp->pending = 0;
 			queue_temp->virt_head = queue_temp->cq;
-
+			queue_temp->stats = (struct rte_dma_stats){0};
 		}
 	}
 	return queue_head;
@@ -619,6 +619,9 @@ fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan,
 		reg = qdma_readl_be(block + FSL_QDMA_BCQMR(fsl_queue->id));
 		reg |= FSL_QDMA_BCQMR_EI_BE;
 		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+		fsl_queue->stats.submitted++;
+	} else {
+		fsl_queue->pending++;
 	}
 	return fsl_comp->index;
 }
@@ -754,6 +757,7 @@ dpaa_qdma_submit(void *dev_private, uint16_t vchan)
 		reg |= FSL_QDMA_BCQMR_EI_BE;
 		qdma_writel_be(reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
 		fsl_queue->pending--;
+		fsl_queue->stats.submitted++;
 	}
 
 	return 0;
@@ -793,6 +797,9 @@ dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,
 	void *block;
 	int intr;
 	void *status = fsl_qdma->status_base;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
 
 	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
 	if (intr) {
@@ -812,6 +819,7 @@ dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,
 		qdma_writel(0xffffffff,
 			    status + FSL_QDMA_DEDR);
 		intr = qdma_readl(status + FSL_QDMA_DEDR);
+		fsl_queue->stats.errors++;
 	}
 
 	block = fsl_qdma->block_base +
@@ -819,6 +827,7 @@ dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,
 
 	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
 						last_idx, st);
+	fsl_queue->stats.completed += intr;
 
 	return intr;
 }
@@ -834,6 +843,9 @@ dpaa_qdma_dequeue(void *dev_private,
 	void *block;
 	int intr;
 	void *status = fsl_qdma->status_base;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
 
 	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
 	if (intr) {
@@ -854,6 +866,7 @@ dpaa_qdma_dequeue(void *dev_private,
 			    status + FSL_QDMA_DEDR);
 		intr = qdma_readl(status + FSL_QDMA_DEDR);
 		*has_error = true;
+		fsl_queue->stats.errors++;
 	}
 
 	block = fsl_qdma->block_base +
@@ -861,16 +874,52 @@ dpaa_qdma_dequeue(void *dev_private,
 
 	intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, nb_cpls,
 						last_idx, NULL);
+	fsl_queue->stats.completed += intr;
 
 	return intr;
 }
 
+static int
+dpaa_qdma_stats_get(const struct rte_dma_dev *dmadev, uint16_t vchan,
+		    struct rte_dma_stats *rte_stats, uint32_t size)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct rte_dma_stats *stats = &fsl_queue->stats;
+
+	if (size < sizeof(struct rte_dma_stats))
+		return -EINVAL;
+	if (rte_stats == NULL)
+		return -EINVAL;
+
+	*rte_stats = *stats;
+
+	return 0;
+}
+
+static int
+dpaa_qdma_stats_reset(struct rte_dma_dev *dmadev, uint16_t vchan)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->data->dev_private;
+	struct fsl_qdma_chan *fsl_chan =
+		&fsl_qdma->chans[fsl_qdma->vchan_map[vchan]];
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+
+	fsl_queue->stats = (struct rte_dma_stats){0};
+
+	return 0;
+}
+
 static struct rte_dma_dev_ops dpaa_qdma_ops = {
 	.dev_info_get		  = dpaa_info_get,
 	.dev_configure            = dpaa_qdma_configure,
 	.dev_start                = dpaa_qdma_start,
 	.dev_close                = dpaa_qdma_close,
 	.vchan_setup		  = dpaa_qdma_queue_setup,
+	.stats_get		  = dpaa_qdma_stats_get,
+	.stats_reset		  = dpaa_qdma_stats_reset,
 };
 
 static int
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
index 6d0ac58317..bf49b2d5d9 100644
--- a/drivers/dma/dpaa/dpaa_qdma.h
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -210,6 +210,7 @@ struct fsl_qdma_queue {
 	u32			pending;
 	struct fsl_qdma_format	*cq;
 	void			*block_base;
+	struct rte_dma_stats	stats;
 };
 
 struct fsl_qdma_comp {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/5] dma/dpaa: introduce DPAA DMA driver
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 1/5] dma/dpaa: introduce " Gagandeep Singh
@ 2021-11-09 14:44             ` Thomas Monjalon
  2021-11-10  5:17               ` Gagandeep Singh
  0 siblings, 1 reply; 42+ messages in thread
From: Thomas Monjalon @ 2021-11-09 14:44 UTC (permalink / raw)
  To: Gagandeep Singh
  Cc: dev, nipun.gupta, david.marchand, ferruh.yigit, gakhil, hemant.agrawal

09/11/2021 05:39, Gagandeep Singh:
> The DPAA DMA  driver is an implementation of the dmadev APIs,
> that provide means to initiate a DMA transaction from CPU.
> The initiated DMA is performed without CPU being involved
> in the actual DMA transaction. This is achieved via using
> the QDMA controller of DPAA SoC.
> 
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> ---
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1377,6 +1377,17 @@ F: drivers/raw/dpaa2_qdma/
>  F: doc/guides/rawdevs/dpaa2_qdma.rst
>  
>  
> +
> +Dmadev Drivers
> +--------------
> +
> +NXP DPAA DMA
> +M: Gagandeep Singh <g.singh@nxp.com>
> +M: Nipun Gupta <nipun.gupta@nxp.com>
> +F: drivers/dma/dpaa/
> +F: doc/guides/dmadevs/dpaa.rst
> +
> +

There is already a section for DMA drivers.

>  Packet processing
>  -----------------
>  
[...]
> --- /dev/null
> +++ b/doc/guides/dmadevs/dpaa.rst
> @@ -0,0 +1,54 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright 2021 NXP
> +
> +NXP DPAA DMA Driver
> +=====================

There are other occurrences in this patch of a lack of attention.
Doing underlining of the correct length is a minimum when starting a doc.

> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -20,6 +20,9 @@ DPDK Release 21.11
>        ninja -C build doc
>        xdg-open build/doc/guides/html/rel_notes/release_21_11.html
>  
> +* **Added NXP DPAA DMA driver.**
> +
> +  * Added a new dmadev driver for NXP DPAA platform.
>  
>  New Features
>  ------------

The new features should be inside this section and well sorted.

There are a lot of other details that I won't report.
All of this looks to be on purpose, or gross negligence.
In any case, it looks to me like sending some junk work to my face.

I am too kind, so I will fix it all, and will merge after the deadline has passed.
But please remember that I don't forget such an attitude,
and I think other maintainers won't forget either.



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/5] dma/dpaa: introduce DPAA DMA driver
  2021-11-09 14:44             ` Thomas Monjalon
@ 2021-11-10  5:17               ` Gagandeep Singh
  0 siblings, 0 replies; 42+ messages in thread
From: Gagandeep Singh @ 2021-11-10  5:17 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Nipun Gupta, david.marchand, ferruh.yigit, gakhil, Hemant Agrawal



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday, November 9, 2021 8:15 PM
> To: Gagandeep Singh <G.Singh@nxp.com>
> Cc: dev@dpdk.org; Nipun Gupta <nipun.gupta@nxp.com>;
> david.marchand@redhat.com; ferruh.yigit@intel.com; gakhil@marvell.com;
> Hemant Agrawal <hemant.agrawal@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH v4 1/5] dma/dpaa: introduce DPAA DMA driver
> 
> 09/11/2021 05:39, Gagandeep Singh:
> > The DPAA DMA  driver is an implementation of the dmadev APIs,
> > that provide means to initiate a DMA transaction from CPU.
> > The initiated DMA is performed without CPU being involved
> > in the actual DMA transaction. This is achieved via using
> > the QDMA controller of DPAA SoC.
> >
> > Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> > ---
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -1377,6 +1377,17 @@ F: drivers/raw/dpaa2_qdma/
> >  F: doc/guides/rawdevs/dpaa2_qdma.rst
> >
> >
> > +
> > +Dmadev Drivers
> > +--------------
> > +
> > +NXP DPAA DMA
> > +M: Gagandeep Singh <g.singh@nxp.com>
> > +M: Nipun Gupta <nipun.gupta@nxp.com>
> > +F: drivers/dma/dpaa/
> > +F: doc/guides/dmadevs/dpaa.rst
> > +
> > +
> 
> There is already a section for DMA drivers.
> 
> >  Packet processing
> >  -----------------
> >
> [...]
> > --- /dev/null
> > +++ b/doc/guides/dmadevs/dpaa.rst
> > @@ -0,0 +1,54 @@
> > +..  SPDX-License-Identifier: BSD-3-Clause
> > +    Copyright 2021 NXP
> > +
> > +NXP DPAA DMA Driver
> > +=====================
> 
> There are other occurrences in this patch of a lack of attention.
> Doing underlining of the correct length is a minimum when starting a doc.
> 
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -20,6 +20,9 @@ DPDK Release 21.11
> >        ninja -C build doc
> >        xdg-open build/doc/guides/html/rel_notes/release_21_11.html
> >
> > +* **Added NXP DPAA DMA driver.**
> > +
> > +  * Added a new dmadev driver for NXP DPAA platform.
> >
> >  New Features
> >  ------------
> 
> The new features should be inside this section and well sorted.
> 
> There are a lot of other details that I won't report.
> All of this looks to be on purpose, or gross negligence.
> In any case, it looks to me like sending some junk work to my face.
> 
> I am too kind, so I will fix it all, and will merge after the deadline has passed.
> But please remember that I don't forget such an attitude,
> and I think other maintainers won't forget either.
> 

I accept my mistakes. This happened while rebasing the patches at the very
last moment.
There was no intention to send junk work to the maintainer. I sent the first
version of the series when the DMA APIs were still in review, and at that time
there was no "Dmadev Drivers" section in MAINTAINERS, so I added it;
the release notes were also in the right place then. But while rebasing at the
very last moment, I forgot to pay attention to the details.

Thank you for being so kind. I will take care of all these details next time.


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v4 0/5] Introduce DPAA DMA driver
  2021-11-09  4:39         ` [dpdk-dev] [PATCH v4 0/5] Introduce " Gagandeep Singh
                             ` (4 preceding siblings ...)
  2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 5/5] dma/dpaa: support statistics Gagandeep Singh
@ 2021-11-10 12:48           ` Thomas Monjalon
  5 siblings, 0 replies; 42+ messages in thread
From: Thomas Monjalon @ 2021-11-10 12:48 UTC (permalink / raw)
  To: Gagandeep Singh; +Cc: dev, nipun.gupta, hemant.agrawal, david.marchand

09/11/2021 05:39, Gagandeep Singh:
> Gagandeep Singh (5):
>   dma/dpaa: introduce DPAA DMA driver
>   dma/dpaa: add device probe and remove functionality
>   dma/dpaa: support basic operations
>   dma/dpaa: support DMA operations
>   dma/dpaa: support statistics

Applied with multiple minor details fixed and dead code removed.

Code changes are below:

diff --git a/MAINTAINERS b/MAINTAINERS
index 0f333b7baa..adee619d36 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1224,6 +1224,12 @@ M: Veerasenareddy Burru <vburru@marvell.com>
 F: drivers/dma/cnxk/
 F: doc/guides/dmadevs/cnxk.rst
 
+NXP DPAA DMA
+M: Gagandeep Singh <g.singh@nxp.com>
+M: Nipun Gupta <nipun.gupta@nxp.com>
+F: drivers/dma/dpaa/
+F: doc/guides/dmadevs/dpaa.rst
+
 
 RegEx Drivers
 -------------
@@ -1377,17 +1383,6 @@ F: drivers/raw/dpaa2_qdma/
 F: doc/guides/rawdevs/dpaa2_qdma.rst
 
 
-
-Dmadev Drivers
---------------
-
-NXP DPAA DMA
-M: Gagandeep Singh <g.singh@nxp.com>
-M: Nipun Gupta <nipun.gupta@nxp.com>
-F: drivers/dma/dpaa/
-F: doc/guides/dmadevs/dpaa.rst
-
-
 Packet processing
 -----------------
 
diff --git a/doc/guides/dmadevs/dpaa.rst b/doc/guides/dmadevs/dpaa.rst
index 7d51c8c4cd..f99bfc6087 100644
--- a/doc/guides/dmadevs/dpaa.rst
+++ b/doc/guides/dmadevs/dpaa.rst
@@ -2,22 +2,24 @@
     Copyright 2021 NXP
 
 NXP DPAA DMA Driver
-=====================
+===================
 
-The DPAA DMA is an implementation of the dmadev APIs, that provide means
-to initiate a DMA transaction from CPU. The initiated DMA is performed
-without CPU being involved in the actual DMA transaction. This is achieved
-via using the QDMA controller of DPAA SoC.
+The DPAA DMA is an implementation of the dmadev APIs,
+that provide means to initiate a DMA transaction from CPU.
+The initiated DMA is performed without CPU being involved
+in the actual DMA transaction.
+This is achieved via using the QDMA controller of DPAA SoC.
 
-The QDMA controller transfers blocks of data between one source and one
-destination. The blocks of data transferred can be represented in memory
+The QDMA controller transfers blocks of data
+between one source and one destination.
+The blocks of data transferred can be represented in memory
 as contiguous or noncontiguous using scatter/gather table(s).
 
 More information can be found at `NXP Official Website
 <http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
 
 Supported DPAA SoCs
---------------------
+-------------------
 
 - LS1046A
 - LS1043A
@@ -35,7 +37,7 @@ See :doc:`../platform/dpaa` for setup information
    dual licensed (BSD & GPLv2), however they are used as BSD in DPDK in userspace.
 
 Compilation
-------------
+-----------
 
 For builds using ``meson`` and ``ninja``, the driver will be built when the
 target platform is dpaa-based. No additional compilation steps are necessary.
@@ -57,10 +59,10 @@ The DPAA DMA implements following features in the dmadev API:
 - Supports DMA silent mode.
 - Supports issuing DMA of data within memory without hogging CPU while
   performing DMA operation.
-- support statistics
+- Supports statistics.
 
 Platform Requirement
-~~~~~~~~~~~~~~~~~~~~
+--------------------
 
-DPAA DMA driver for DPDK can only work on NXP SoCs as listed in the
-``Supported DPAA SoCs``.
+DPAA DMA driver for DPDK can only work on NXP SoCs
+as listed in the `Supported DPAA SoCs`_.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index ba6ad7bf16..7d60b554d8 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -20,9 +20,6 @@ DPDK Release 21.11
       ninja -C build doc
       xdg-open build/doc/guides/html/rel_notes/release_21_11.html
 
-* **Added NXP DPAA DMA driver.**
-
-  * Added a new dmadev driver for NXP DPAA platform.
 
 New Features
 ------------
@@ -99,6 +96,10 @@ New Features
   Added dmadev driver for the DPI DMA hardware accelerator
   of Marvell OCTEONTX2 and OCTEONTX3 family of SoCs.
 
+* **Added NXP DPAA DMA driver.**
+
+  Added a new dmadev driver for NXP DPAA platform.
+
 * **Added support to get all MAC addresses of a device.**
 
   Added ``rte_eth_macaddrs_get`` to allow user to retrieve all Ethernet
diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index cb272c700f..9386fe5698 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -194,7 +194,7 @@ fsl_qdma_pre_request_enqueue_comp_sd_desc(
 	struct fsl_qdma_format *csgf_desc;
 	int i;
 
-	for (i = 0; i < (int)(queue->n_cq + COMMAND_QUEUE_OVERFLLOW); i++) {
+	for (i = 0; i < (int)(queue->n_cq + COMMAND_QUEUE_OVERFLOW); i++) {
 		comp_temp = rte_zmalloc("qdma: comp temp",
 					sizeof(*comp_temp), 0);
 		if (!comp_temp)
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
index bf49b2d5d9..7e9e76e21a 100644
--- a/drivers/dma/dpaa/dpaa_qdma.h
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -22,7 +22,6 @@
 
 #define FSL_QDMA_DMR			0x0
 #define FSL_QDMA_DSR			0x4
-#define FSL_QDMA_DEIER			0xe00
 #define FSL_QDMA_DEDR			0xe04
 #define FSL_QDMA_DECFDW0R		0xe10
 #define FSL_QDMA_DECFDW1R		0xe14
@@ -47,47 +46,25 @@
 #define FSL_QDMA_BSQMR			0x800
 #define FSL_QDMA_BSQSR			0x804
 #define FSL_QDMA_BSQICR			0x828
-#define FSL_QDMA_CQMR			0xa00
-#define FSL_QDMA_CQDSCR1		0xa08
-#define FSL_QDMA_CQDSCR2                0xa0c
 #define FSL_QDMA_CQIER			0xa10
-#define FSL_QDMA_CQEDR			0xa14
 #define FSL_QDMA_SQCCMR			0xa20
 
-#define FSL_QDMA_SQICR_ICEN
-
-#define FSL_QDMA_CQIDR_CQT		0xff000000
-#define FSL_QDMA_CQIDR_SQPE		0x800000
-#define FSL_QDMA_CQIDR_SQT		0x8000
-
-#define FSL_QDMA_BCQIER_CQTIE		0x8000
-#define FSL_QDMA_BCQIER_CQPEIE		0x800000
-#define FSL_QDMA_BSQICR_ICEN		0x80000000
-#define FSL_QDMA_BSQICR_ICST(x)		((x) << 16)
-#define FSL_QDMA_CQIER_MEIE		0x80000000
-#define FSL_QDMA_CQIER_TEIE		0x1
 #define FSL_QDMA_SQCCMR_ENTER_WM	0x200000
 
 #define FSL_QDMA_QUEUE_MAX		8
 
 #define FSL_QDMA_BCQMR_EN		0x80000000
-#define FSL_QDMA_BCQMR_EI		0x40000000
-#define FSL_QDMA_BCQMR_EI_BE           0x40
+#define FSL_QDMA_BCQMR_EI_BE		0x40
 #define FSL_QDMA_BCQMR_CD_THLD(x)	((x) << 20)
 #define FSL_QDMA_BCQMR_CQ_SIZE(x)	((x) << 16)
 
-#define FSL_QDMA_BCQSR_QF		0x10000
-#define FSL_QDMA_BCQSR_XOFF		0x1
-#define FSL_QDMA_BCQSR_QF_XOFF_BE      0x1000100
+#define FSL_QDMA_BCQSR_QF_XOFF_BE	0x1000100
 
 #define FSL_QDMA_BSQMR_EN		0x80000000
-#define FSL_QDMA_BSQMR_DI		0x40000000
 #define FSL_QDMA_BSQMR_DI_BE		0x40
 #define FSL_QDMA_BSQMR_CQ_SIZE(x)	((x) << 16)
 
-#define FSL_QDMA_BSQSR_QE		0x20000
 #define FSL_QDMA_BSQSR_QE_BE		0x200
-#define FSL_QDMA_BSQSR_QF		0x10000
 
 #define FSL_QDMA_DMR_DQD		0x40000000
 #define FSL_QDMA_DSR_DB			0x80000000
@@ -99,13 +76,9 @@
 #define FSL_QDMA_QUEUE_NUM_MAX		8
 
 #define FSL_QDMA_CMD_RWTTYPE		0x4
-#define FSL_QDMA_CMD_LWC                0x2
+#define FSL_QDMA_CMD_LWC		0x2
 
 #define FSL_QDMA_CMD_RWTTYPE_OFFSET	28
-#define FSL_QDMA_CMD_NS_OFFSET		27
-#define FSL_QDMA_CMD_DQOS_OFFSET	24
-#define FSL_QDMA_CMD_WTHROTL_OFFSET	20
-#define FSL_QDMA_CMD_DSEN_OFFSET	19
 #define FSL_QDMA_CMD_LWC_OFFSET		16
 
 #define QDMA_CCDF_STATUS		20
@@ -115,23 +88,21 @@
 #define QDMA_CCDF_SER			BIT(30)
 
 #define QDMA_SG_FIN			BIT(30)
-#define QDMA_SG_EXT			BIT(31)
 #define QDMA_SG_LEN_MASK		GENMASK(29, 0)
 
-#define QDMA_BIG_ENDIAN			1
-#define COMP_TIMEOUT			100000
-#define COMMAND_QUEUE_OVERFLLOW		10
+#define COMMAND_QUEUE_OVERFLOW		10
 
 /* qdma engine attribute */
-#define QDMA_QUEUE_SIZE 64
-#define QDMA_STATUS_SIZE 64
-#define QDMA_CCSR_BASE 0x8380000
-#define VIRT_CHANNELS 32
-#define QDMA_BLOCK_OFFSET 0x10000
-#define QDMA_BLOCKS 4
-#define QDMA_QUEUES 8
-#define QDMA_DELAY 1000
+#define QDMA_QUEUE_SIZE			64
+#define QDMA_STATUS_SIZE		64
+#define QDMA_CCSR_BASE			0x8380000
+#define VIRT_CHANNELS			32
+#define QDMA_BLOCK_OFFSET		0x10000
+#define QDMA_BLOCKS			4
+#define QDMA_QUEUES			8
+#define QDMA_DELAY			1000
 
+#define QDMA_BIG_ENDIAN			1
 #ifdef QDMA_BIG_ENDIAN
 #define QDMA_IN(addr)		be32_to_cpu(rte_read32(addr))
 #define QDMA_OUT(addr, val)	rte_write32(be32_to_cpu(val), addr)
@@ -180,14 +151,6 @@ struct fsl_qdma_ddf {
 	__le32 cmd;
 };
 
-enum dma_status {
-	DMA_COMPLETE,
-	DMA_IN_PROGRESS,
-	DMA_IN_PREPAR,
-	DMA_PAUSED,
-	DMA_ERROR,
-};
-
 struct fsl_qdma_chan {
 	struct fsl_qdma_engine	*qdma;
 	struct fsl_qdma_queue	*queue;
@@ -195,16 +158,12 @@ struct fsl_qdma_chan {
 	struct list_head	list;
 };
 
-struct fsl_qdma_list {
-	struct list_head	dma_list;
-};
-
 struct fsl_qdma_queue {
 	struct fsl_qdma_format	*virt_head;
 	struct list_head	comp_used;
 	struct list_head	comp_free;
 	dma_addr_t		bus_addr;
-	u32                     n_cq;
+	u32			n_cq;
 	u32			id;
 	u32			count;
 	u32			pending;
@@ -214,8 +173,8 @@ struct fsl_qdma_queue {
 };
 
 struct fsl_qdma_comp {
-	dma_addr_t              bus_addr;
-	dma_addr_t              desc_bus_addr;
+	dma_addr_t		bus_addr;
+	dma_addr_t		desc_bus_addr;
 	void			*virt_addr;
 	int			index;
 	void			*desc_virt_addr;
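
Moving QDMA_BIG_ENDIAN next to the #ifdef it guards (last hunk above)
makes the accessor selection easier to read: the QDMA register block is
big-endian, so every 32-bit access is byte-swapped on little-endian Arm
cores. A 32-bit byte swap is its own inverse, which is why a
be32_to_cpu()-style helper works in both directions. A rough,
self-contained equivalent of the QDMA_IN/QDMA_OUT pair built only from
public DPDK helpers (qdma_reg_read/qdma_reg_write are illustrative
names, not the driver's):

    #include <stdint.h>
    #include <rte_io.h>         /* rte_read32(), rte_write32() */
    #include <rte_byteorder.h>  /* rte_be_to_cpu_32(), rte_cpu_to_be_32() */

    /* Read a 32-bit big-endian register, returning the value in CPU order. */
    static inline uint32_t
    qdma_reg_read(const volatile void *addr)
    {
            return rte_be_to_cpu_32(rte_read32(addr));
    }

    /* Convert a CPU-order value to big-endian and write it to the device. */
    static inline void
    qdma_reg_write(volatile void *addr, uint32_t val)
    {
            rte_write32(rte_cpu_to_be_32(val), addr);
    }
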
diff --git a/drivers/dma/dpaa/dpaa_qdma_logs.h b/drivers/dma/dpaa/dpaa_qdma_logs.h
index 01d4a508fc..762598f8f7 100644
--- a/drivers/dma/dpaa/dpaa_qdma_logs.h
+++ b/drivers/dma/dpaa/dpaa_qdma_logs.h
@@ -5,10 +5,6 @@
 #ifndef __DPAA_QDMA_LOGS_H__
 #define __DPAA_QDMA_LOGS_H__
 
-#ifdef __cplusplus
-extern "C" {
-#endif
-
 extern int dpaa_qdma_logtype;
 
 #define DPAA_QDMA_LOG(level, fmt, args...) \
@@ -39,8 +35,4 @@ extern int dpaa_qdma_logtype;
 #define DPAA_QDMA_DP_WARN(fmt, args...) \
 	DPAA_QDMA_DP_LOG(WARNING, fmt, ## args)
 
-#ifdef __cplusplus
-}
-#endif
-
 #endif /* __DPAA_QDMA_LOGS_H__ */
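
Dropping the extern "C" guards is harmless here: the header is internal
to the driver and only ever included from C sources. The macros it
keeps are thin wrappers over DPDK's dynamic logging; a self-contained
sketch of how such a driver logtype is typically wired up (the
my_qdma_* names are illustrative, not the driver's exact definitions):

    #include <rte_log.h>

    /* Register a dynamic log type for the driver at load time. */
    RTE_LOG_REGISTER_DEFAULT(my_qdma_logtype, INFO);

    /* Route driver messages through that type with a fixed prefix. */
    #define MY_QDMA_LOG(level, fmt, args...) \
            rte_log(RTE_LOG_ ## level, my_qdma_logtype, \
                    "my_qdma: " fmt "\n", ## args)

    /* Usage: MY_QDMA_LOG(ERR, "queue %d init failed", qid); */
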
diff --git a/drivers/dma/dpaa/meson.build b/drivers/dma/dpaa/meson.build
index 9ab0862ede..c31a6d91fe 100644
--- a/drivers/dma/dpaa/meson.build
+++ b/drivers/dma/dpaa/meson.build
@@ -2,13 +2,13 @@
 # Copyright 2021 NXP
 
 if not is_linux
-	build = false
-	reason = 'only supported on linux'
+    build = false
+    reason = 'only supported on linux'
 endif
 
 deps += ['dmadev', 'bus_dpaa']
 sources = files('dpaa_qdma.c')
 
 if cc.has_argument('-Wno-pointer-arith')
-	cflags += '-Wno-pointer-arith'
+    cflags += '-Wno-pointer-arith'
 endif
diff --git a/drivers/dma/dpaa/version.map b/drivers/dma/dpaa/version.map
index 7bab7bea48..c2e0723b4c 100644
--- a/drivers/dma/dpaa/version.map
+++ b/drivers/dma/dpaa/version.map
@@ -1,4 +1,3 @@
 DPDK_22 {
-
 	local: *;
 };
diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
index 7cdd6cd28f..8bbc48cbde 100644
--- a/drivers/dma/meson.build
+++ b/drivers/dma/meson.build
@@ -3,7 +3,7 @@
 
 drivers = [
         'cnxk',
-	'dpaa',
+        'dpaa',
         'hisilicon',
         'idxd',
         'ioat',

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2021-11-10 12:48 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-09 11:14 [dpdk-dev] [PATCH 0/6] Introduce DPAA DMA driver Gagandeep Singh
2021-09-09 11:14 ` [dpdk-dev] [PATCH 1/6] dma/dpaa: introduce " Gagandeep Singh
2021-09-09 11:14 ` [dpdk-dev] [PATCH 2/6] dma/dpaa: add device probe and remove functionality Gagandeep Singh
2021-09-09 11:14 ` [dpdk-dev] [PATCH 3/6] dma/dpaa: add driver logs Gagandeep Singh
2021-09-09 11:14 ` [dpdk-dev] [PATCH 4/6] dma/dpaa: support basic operations Gagandeep Singh
2021-09-09 11:14 ` [dpdk-dev] [PATCH 5/6] dma/dpaa: support DMA operations Gagandeep Singh
2021-09-09 11:15 ` [dpdk-dev] [PATCH 6/6] doc: add user guide of DPAA DMA driver Gagandeep Singh
2021-10-27 14:57 ` [dpdk-dev] [PATCH 0/6] Introduce " Thomas Monjalon
2021-10-28  4:34   ` Gagandeep Singh
2021-11-01  8:51 ` [dpdk-dev] [PATCH v2 " Gagandeep Singh
2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 1/6] dma/dpaa: introduce " Gagandeep Singh
2021-11-02  8:51     ` fengchengwen
2021-11-02 15:27       ` Thomas Monjalon
2021-11-08  9:06     ` [dpdk-dev] [PATCH v3 0/7] Introduce " Gagandeep Singh
2021-11-08  9:06       ` [dpdk-dev] [PATCH v3 1/7] dma/dpaa: introduce " Gagandeep Singh
2021-11-09  4:39         ` [dpdk-dev] [PATCH v4 0/5] Introduce " Gagandeep Singh
2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 1/5] dma/dpaa: introduce " Gagandeep Singh
2021-11-09 14:44             ` Thomas Monjalon
2021-11-10  5:17               ` Gagandeep Singh
2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 2/5] dma/dpaa: add device probe and remove functionality Gagandeep Singh
2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 3/5] dma/dpaa: support basic operations Gagandeep Singh
2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 4/5] dma/dpaa: support DMA operations Gagandeep Singh
2021-11-09  4:39           ` [dpdk-dev] [PATCH v4 5/5] dma/dpaa: support statistics Gagandeep Singh
2021-11-10 12:48           ` [dpdk-dev] [PATCH v4 0/5] Introduce DPAA DMA driver Thomas Monjalon
2021-11-08  9:06       ` [dpdk-dev] [PATCH v3 2/7] dma/dpaa: add device probe and remove functionality Gagandeep Singh
2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 3/7] dma/dpaa: add driver logs Gagandeep Singh
2021-11-08  9:38         ` Thomas Monjalon
2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 4/7] dma/dpaa: support basic operations Gagandeep Singh
2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 5/7] dma/dpaa: support DMA operations Gagandeep Singh
2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 6/7] dma/dpaa: support statistics Gagandeep Singh
2021-11-08  9:07       ` [dpdk-dev] [PATCH v3 7/7] doc: add user guide of DPAA DMA driver Gagandeep Singh
2021-11-08  9:37         ` Thomas Monjalon
2021-11-08  9:39         ` Thomas Monjalon
2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 2/6] dma/dpaa: add device probe and remove functionality Gagandeep Singh
2021-11-02  9:07     ` fengchengwen
2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 3/6] dma/dpaa: add driver logs Gagandeep Singh
2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 4/6] dma/dpaa: support basic operations Gagandeep Singh
2021-11-02  9:21     ` fengchengwen
2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 5/6] dma/dpaa: support DMA operations Gagandeep Singh
2021-11-02  9:31     ` fengchengwen
2021-11-08  9:06       ` Gagandeep Singh
2021-11-01  8:51   ` [dpdk-dev] [PATCH v2 6/6] doc: add user guide of DPAA DMA driver Gagandeep Singh
