* [dpdk-dev] [PATCH RFC] raw: add dpaa qdma driver
@ 2020-09-07 9:50 Gagandeep Singh
2020-09-25 6:10 ` Hemant Agrawal
0 siblings, 1 reply; 4+ messages in thread
From: Gagandeep Singh @ 2020-09-07 9:50 UTC (permalink / raw)
To: dev, nipun.gupta, hemant.agrawal; +Cc: thomas, Gagandeep Singh, Peng Ma
This patch adds support for dpaa qdma based driver.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Peng Ma <peng.ma@nxp.com>
---
doc/guides/rawdevs/dpaa_qdma.rst | 98 ++
doc/guides/rawdevs/index.rst | 1 +
drivers/bus/dpaa/dpaa_bus.c | 22 +
drivers/bus/dpaa/rte_dpaa_bus.h | 5 +
drivers/common/dpaax/dpaa_list.h | 6 +-
drivers/raw/dpaa_qdma/dpaa_qdma.c | 1074 ++++++++++++++++++++
drivers/raw/dpaa_qdma/dpaa_qdma.h | 275 +++++
drivers/raw/dpaa_qdma/dpaa_qdma_logs.h | 46 +
drivers/raw/dpaa_qdma/meson.build | 15 +
.../raw/dpaa_qdma/rte_rawdev_dpaa_qdma_version.map | 3 +
drivers/raw/meson.build | 2 +-
11 files changed, 1543 insertions(+), 4 deletions(-)
create mode 100644 doc/guides/rawdevs/dpaa_qdma.rst
create mode 100644 drivers/raw/dpaa_qdma/dpaa_qdma.c
create mode 100644 drivers/raw/dpaa_qdma/dpaa_qdma.h
create mode 100644 drivers/raw/dpaa_qdma/dpaa_qdma_logs.h
create mode 100644 drivers/raw/dpaa_qdma/meson.build
create mode 100644 drivers/raw/dpaa_qdma/rte_rawdev_dpaa_qdma_version.map
diff --git a/doc/guides/rawdevs/dpaa_qdma.rst b/doc/guides/rawdevs/dpaa_qdma.rst
new file mode 100644
index 0000000..49457f6
--- /dev/null
+++ b/doc/guides/rawdevs/dpaa_qdma.rst
@@ -0,0 +1,98 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2020 NXP
+
+NXP DPAA QDMA Driver
+=====================
+
+The DPAA QDMA is an implementation of the rawdev API that provides a means
+to initiate a DMA transaction from the CPU. The initiated DMA is performed
+without the CPU being involved in the actual DMA transaction. This is
+achieved using the qDMA controller of the DPAA SoC.
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+Features
+--------
+
+The DPAA QDMA implements the following features in the rawdev API:
+
+- Supports issuing DMA of data within memory without hogging the CPU while
+  performing the DMA operation.
+- Supports optionally reporting the status of each DMA transaction on a
+  per-operation basis.
+
+Supported DPAA SoCs
+--------------------
+
+- LS1043A
+- LS1046A
+
+Prerequisites
+-------------
+
+See :doc:`../platform/dpaa` for setup information.
+
+Currently supported by DPDK:
+
+- NXP SDK **19.09+**.
+- Supported architectures: **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+.. note::
+
+   Some parts of the dpaa bus code (qbman and fman - library) routines are
+   dual licensed (BSD & GPLv2).
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+
+- ``CONFIG_RTE_LIBRTE_PMD_DPAA_QDMA_RAWDEV`` (default ``y``)
+
+ Toggle compilation of the ``lrte_pmd_dpaa_qdma`` driver.
+
+Enabling logs
+-------------
+
+For enabling logs, use the following EAL parameter:
+
+.. code-block:: console
+
+ ./your_qdma_application <EAL args> --log-level=pmd.raw.dpaa.qdma,<level>
+
+Using ``pmd.raw.dpaa.qdma`` as the log matching criterion, all QDMA PMD logs
+at or below the given ``level`` are enabled.
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the DPAA QDMA PMD for Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+ cd <DPDK-source-directory>
+ make config T=arm64-dpaa-linux-gcc install
+
+Initialization
+--------------
+
+The DPAA QDMA devices are registered on the DPAA bus. On EAL initialization,
+they are probed and populated as raw devices named ``dpaa_qdma-x``. The
+rawdev ID of a device can be obtained by
+
+* Invoking ``rte_rawdev_get_dev_id("dpaa_qdma-x")`` from the application,
+  where x is the device index (starting from 1). The user can use this ID
+  for further rawdev function calls, as shown below.
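+
+A minimal lookup sketch (illustrative application code only; the device
+name and error handling are simplified):
+
+.. code-block:: c
+
+   #include <rte_rawdev.h>
+
+   /* Look up the rawdev ID of the first DPAA QDMA device. */
+   uint16_t dev_id = rte_rawdev_get_dev_id("dpaa_qdma-1");
+
+   /* The ID is then used for configure/queue-setup/enqueue calls. */
+   rte_rawdev_configure(dev_id, NULL);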
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+DPAA drivers for DPDK work only on the NXP SoCs listed in
+``Supported DPAA SoCs``.
diff --git a/doc/guides/rawdevs/index.rst b/doc/guides/rawdevs/index.rst
index f64ec44..8450006 100644
--- a/doc/guides/rawdevs/index.rst
+++ b/doc/guides/rawdevs/index.rst
@@ -11,6 +11,7 @@ application through rawdev API.
:maxdepth: 2
:numbered:
+ dpaa_qdma
dpaa2_cmdif
dpaa2_qdma
ifpga
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 32e872d..8697e9e 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -229,6 +229,28 @@ dpaa_create_device_list(void)
rte_dpaa_bus.device_count += i;
+ /* Creating QDMA Device */
+ for (i = 0; i < RTE_DPAA_QDMA_DEVICES; i++) {
+ dev = calloc(1, sizeof(struct rte_dpaa_device));
+ if (!dev) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate QDMA device");
+ ret = -1;
+ goto cleanup;
+ }
+
+ dev->device_type = FSL_DPAA_QDMA;
+ dev->id.dev_id = rte_dpaa_bus.device_count + i;
+
+ memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
+		sprintf(dev->name, "dpaa_qdma-%d", i + 1);
+ DPAA_BUS_LOG(INFO, "%s qdma device added", dev->name);
+ dev->device.name = dev->name;
+ dev->device.devargs = dpaa_devargs_lookup(dev);
+
+ dpaa_add_to_device_list(dev);
+ }
+ rte_dpaa_bus.device_count += i;
+
return 0;
cleanup:
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index fdaa63a..959cfdb 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -33,6 +33,9 @@
/** Device driver supports link state interrupt */
#define RTE_DPAA_DRV_INTR_LSC 0x0008
+/** Number of supported QDMA devices */
+#define RTE_DPAA_QDMA_DEVICES 1
+
#define RTE_DEV_TO_DPAA_CONST(ptr) \
container_of(ptr, const struct rte_dpaa_device, device)
@@ -48,6 +51,7 @@ TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
enum rte_dpaa_type {
FSL_DPAA_ETH = 1,
FSL_DPAA_CRYPTO,
+ FSL_DPAA_QDMA,
};
struct rte_dpaa_bus {
@@ -70,6 +74,7 @@ struct rte_dpaa_device {
union {
struct rte_eth_dev *eth_dev;
struct rte_cryptodev *crypto_dev;
+ struct rte_rawdev *rawdev;
};
struct rte_dpaa_driver *driver;
struct dpaa_device_id id;
diff --git a/drivers/common/dpaax/dpaa_list.h b/drivers/common/dpaax/dpaa_list.h
index e945759..58c563e 100644
--- a/drivers/common/dpaax/dpaa_list.h
+++ b/drivers/common/dpaax/dpaa_list.h
@@ -1,7 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- *
- * Copyright 2017 NXP
- *
+ * Copyright 2017,2020 NXP
*/
#ifndef __DPAA_LIST_H
@@ -35,6 +33,8 @@ do { \
const struct list_head *__p298 = (p); \
((__p298->next == __p298) && (__p298->prev == __p298)); \
})
+#define list_first_entry(ptr, type, member) \
+ list_entry((ptr)->next, type, member)
#define list_add(p, l) \
do { \
struct list_head *__p298 = (p); \
diff --git a/drivers/raw/dpaa_qdma/dpaa_qdma.c b/drivers/raw/dpaa_qdma/dpaa_qdma.c
new file mode 100644
index 0000000..6897dc4
--- /dev/null
+++ b/drivers/raw/dpaa_qdma/dpaa_qdma.c
@@ -0,0 +1,1074 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ * Driver for NXP Layerscape Queue direct memory access controller (qDMA)
+ */
+
+#include <sys/time.h>
+#include <semaphore.h>
+
+#include <rte_mbuf.h>
+#include <rte_rawdev.h>
+#include <rte_dpaa_bus.h>
+#include <rte_rawdev_pmd.h>
+#include <compat.h>
+#include <rte_hexdump.h>
+
+#include <rte_pmd_dpaa2_qdma.h>
+#include "dpaa_qdma.h"
+#include "dpaa_qdma_logs.h"
+
+/* Dynamic log type identifier */
+int dpaa_qdma_logtype;
+
+static inline u64
+qdma_ccdf_addr_get64(const struct fsl_qdma_format *ccdf)
+{
+ return rte_le_to_cpu_64(ccdf->data) & 0xffffffffffLLU;
+}
+
+static inline void
+qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
+{
+ ccdf->addr_hi = upper_32_bits(addr);
+ ccdf->addr_lo = rte_cpu_to_le_32(lower_32_bits(addr));
+}
+
+static inline u64
+qdma_ccdf_get_queue(const struct fsl_qdma_format *ccdf)
+{
+ return ccdf->cfg8b_w1 & 0xff;
+}
+
+static inline int
+qdma_ccdf_get_offset(const struct fsl_qdma_format *ccdf)
+{
+ return (rte_le_to_cpu_32(ccdf->cfg) & QDMA_CCDF_MASK) >> QDMA_CCDF_OFFSET;
+}
+
+static inline void
+qdma_ccdf_set_format(struct fsl_qdma_format *ccdf, int offset)
+{
+	ccdf->cfg = rte_cpu_to_le_32(QDMA_CCDF_FORMAT | offset);
+}
+
+static inline int
+qdma_ccdf_get_status(const struct fsl_qdma_format *ccdf)
+{
+ return (rte_le_to_cpu_32(ccdf->status) & QDMA_CCDF_MASK) >> QDMA_CCDF_STATUS;
+}
+
+static inline void
+qdma_ccdf_set_ser(struct fsl_qdma_format *ccdf, int status)
+{
+ ccdf->status = rte_cpu_to_le_32(QDMA_CCDF_SER | status);
+}
+
+static inline void qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
+{
+ csgf->cfg = rte_cpu_to_le_32(len & QDMA_SG_LEN_MASK);
+}
+
+static inline void qdma_csgf_set_f(struct fsl_qdma_format *csgf, int len)
+{
+ csgf->cfg = rte_cpu_to_le_32(QDMA_SG_FIN | (len & QDMA_SG_LEN_MASK));
+}
+
+static inline void qdma_csgf_set_e(struct fsl_qdma_format *csgf, int len)
+{
+ csgf->cfg = rte_cpu_to_le_32(QDMA_SG_EXT | (len & QDMA_SG_LEN_MASK));
+}
+
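+/* Floor of log2; e.g. ilog2(64) == 6. Used to program ring-size fields. */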
+static inline int ilog2(int x)
+{
+ int log = 0;
+
+ x >>= 1;
+
+ while (x) {
+ log++;
+ x >>= 1;
+ }
+ return log;
+}
+
+static u32 qdma_readl(struct fsl_qdma_engine *qdma, void *addr)
+{
+ return QDMA_IN(qdma, addr);
+}
+
+static void qdma_writel(struct fsl_qdma_engine *qdma, u32 val,
+ void *addr)
+{
+ QDMA_OUT(qdma, addr, val);
+}
+
+static void *dma_pool_alloc(int size, int aligned, dma_addr_t *phy_addr)
+{
+#ifdef QDMA_MEMZONE
+ void *virt_addr;
+
+ virt_addr = rte_malloc("dma pool alloc", size, aligned);
+ if (!virt_addr)
+ return NULL;
+
+ *phy_addr = rte_mem_virt2iova(virt_addr);
+
+ return virt_addr;
+#else
+ const struct rte_memzone *mz;
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ uint32_t core_id = rte_lcore_id();
+ unsigned int socket_id;
+ int count = 0;
+
+	memset(mz_name, 0, sizeof(mz_name));
+ snprintf(mz_name, sizeof(mz_name) - 1, "%lx-times-%d",
+ (unsigned long)rte_get_timer_cycles(), count);
+ if (core_id == (unsigned int)LCORE_ID_ANY)
+ core_id = 0;
+ socket_id = rte_lcore_to_socket_id(core_id);
+ mz = rte_memzone_reserve_aligned(mz_name, size, socket_id, 0, aligned);
+ if (!mz) {
+ *phy_addr = 0;
+ return NULL;
+ }
+ *phy_addr = mz->iova;
+
+	if (qdma_mz_count >= RTE_MAX_MEMZONE) {
+		rte_memzone_free(mz);
+		*phy_addr = 0;
+		return NULL;
+	}
+	qdma_mz_mapping[qdma_mz_count++] = mz;
+
+ return mz->addr;
+
+#endif
+}
+
+#ifdef QDMA_MEMZONE
+static void dma_pool_free(void *addr)
+{
+	rte_free(addr);
+}
+#else
+static void dma_pool_free(dma_addr_t *addr)
+{
+	uint16_t i;
+
+	for (i = 0; i < qdma_mz_count; i++) {
+		if (addr == (dma_addr_t *)qdma_mz_mapping[i]->iova) {
+			rte_memzone_free(qdma_mz_mapping[i]);
+			return;
+		}
+	}
+}
+#endif
+
+static void fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
+{
+ struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+ struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+ struct fsl_qdma_comp *comp_temp, *_comp_temp;
+ int id;
+
+ if (--fsl_queue->count)
+ goto finally;
+
+	id = (fsl_queue->block_base - fsl_qdma->block_base) /
+	      fsl_qdma->block_offset;
+
+ while (rte_atomic32_read(&wait_task[id]) == 1)
+ rte_delay_us(QDMA_DELAY);
+
+ list_for_each_entry_safe(comp_temp, _comp_temp,
+ &fsl_queue->comp_used, list) {
+ list_del(&comp_temp->list);
+#ifdef QDMA_MEMZONE
+ dma_pool_free(comp_temp->virt_addr);
+ dma_pool_free(comp_temp->desc_virt_addr);
+#else
+ dma_pool_free((dma_addr_t *)comp_temp->bus_addr);
+ dma_pool_free((dma_addr_t *)comp_temp->desc_bus_addr);
+#endif
+ rte_free(comp_temp);
+ }
+
+ list_for_each_entry_safe(comp_temp, _comp_temp,
+ &fsl_queue->comp_free, list) {
+ list_del(&comp_temp->list);
+#ifdef QDMA_MEMZONE
+ dma_pool_free(comp_temp->virt_addr);
+ dma_pool_free(comp_temp->desc_virt_addr);
+#else
+ dma_pool_free((dma_addr_t *)comp_temp->bus_addr);
+ dma_pool_free((dma_addr_t *)comp_temp->desc_bus_addr);
+#endif
+ rte_free(comp_temp);
+ }
+
+finally:
+ fsl_qdma->desc_allocated--;
+}
+
+static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
+ dma_addr_t dst, dma_addr_t src, u32 len)
+{
+ struct fsl_qdma_format *ccdf, *csgf_desc, *csgf_src, *csgf_dest;
+ struct fsl_qdma_sdf *sdf;
+ struct fsl_qdma_ddf *ddf;
+
+ ccdf = (struct fsl_qdma_format *)fsl_comp->virt_addr;
+ csgf_desc = (struct fsl_qdma_format *)fsl_comp->virt_addr + 1;
+ csgf_src = (struct fsl_qdma_format *)fsl_comp->virt_addr + 2;
+ csgf_dest = (struct fsl_qdma_format *)fsl_comp->virt_addr + 3;
+ sdf = (struct fsl_qdma_sdf *)fsl_comp->desc_virt_addr;
+ ddf = (struct fsl_qdma_ddf *)fsl_comp->desc_virt_addr + 1;
+
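+	/*
+	 * Compound frame layout in the command buffer, four 16-byte entries:
+	 *   [0] ccdf      - head command (frame) descriptor, points at [1]
+	 *   [1] csgf_desc - S/G entry referencing the sdf/ddf pair below
+	 *   [2] csgf_src  - S/G entry for the source buffer
+	 *   [3] csgf_dest - S/G entry for the destination buffer (final)
+	 */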
+ memset(fsl_comp->virt_addr, 0, FSL_QDMA_COMMAND_BUFFER_SIZE);
+ memset(fsl_comp->desc_virt_addr, 0, FSL_QDMA_DESCRIPTOR_BUFFER_SIZE);
+ /* Head Command Descriptor(Frame Descriptor) */
+ qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16);
+ qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(ccdf));
+ qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(ccdf));
+ /* Status notification is enqueued to status queue. */
+ /* Compound Command Descriptor(Frame List Table) */
+ qdma_desc_addr_set64(csgf_desc, fsl_comp->desc_bus_addr);
+	/* The length must be 32: the size of the sdf/ddf descriptor pair. */
+ qdma_csgf_set_len(csgf_desc, 32);
+ qdma_desc_addr_set64(csgf_src, src);
+ qdma_csgf_set_len(csgf_src, len);
+ qdma_desc_addr_set64(csgf_dest, dst);
+ qdma_csgf_set_len(csgf_dest, len);
+ /* This entry is the last entry. */
+ qdma_csgf_set_f(csgf_dest, len);
+ /* Descriptor Buffer */
+ sdf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
+ FSL_QDMA_CMD_RWTTYPE_OFFSET);
+ ddf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
+ FSL_QDMA_CMD_RWTTYPE_OFFSET);
+ ddf->cmd |= rte_cpu_to_le_32(FSL_QDMA_CMD_LWC <<
+ FSL_QDMA_CMD_LWC_OFFSET);
+}
+
+/*
+ * Pre-request command descriptor and compound S/G for enqueue.
+ */
+static int fsl_qdma_pre_request_enqueue_comp_desc(struct fsl_qdma_queue *queue,
+ int size, int aligned)
+{
+ struct fsl_qdma_comp *comp_temp;
+ int i;
+
+	for (i = 0; i < (int)(queue->n_cq + COMMAND_QUEUE_OVERFLOW); i++) {
+ comp_temp = rte_zmalloc("qdma: comp temp",
+ sizeof(*comp_temp), 0);
+ if (!comp_temp)
+ return -ENOMEM;
+
+ comp_temp->virt_addr =
+ dma_pool_alloc(size, aligned, &comp_temp->bus_addr);
+ if (!comp_temp->virt_addr) {
+ rte_free(comp_temp);
+ return -ENOMEM;
+ }
+
+ list_add_tail(&comp_temp->list, &queue->comp_free);
+ }
+
+ return 0;
+}
+
+/*
+ * Pre-request source and destination descriptor for enqueue.
+ */
+static int fsl_qdma_pre_request_enqueue_sd_desc(struct fsl_qdma_queue *queue,
+ int size, int aligned)
+{
+ struct fsl_qdma_comp *comp_temp, *_comp_temp;
+
+ list_for_each_entry_safe(comp_temp, _comp_temp,
+ &queue->comp_free, list) {
+ comp_temp->desc_virt_addr =
+ dma_pool_alloc(size, aligned, &comp_temp->desc_bus_addr);
+ if (!comp_temp->desc_virt_addr)
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+/*
+ * Request a command descriptor for enqueue.
+ */
+static struct fsl_qdma_comp *
+fsl_qdma_request_enqueue_desc(struct fsl_qdma_chan *fsl_chan)
+{
+ struct fsl_qdma_queue *queue = fsl_chan->queue;
+ struct fsl_qdma_comp *comp_temp;
+ int timeout = COMP_TIMEOUT;
+
+ while (timeout) {
+ if (!list_empty(&queue->comp_free)) {
+ comp_temp = list_first_entry(&queue->comp_free,
+ struct fsl_qdma_comp,
+ list);
+ list_del(&comp_temp->list);
+ return comp_temp;
+ }
+ timeout--;
+ }
+
+ return NULL;
+}
+
+static struct fsl_qdma_queue
+*fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
+{
+ struct fsl_qdma_queue *queue_head, *queue_temp;
+ int len, i, j;
+ int queue_num;
+ int blocks;
+ unsigned int queue_size[FSL_QDMA_QUEUE_MAX];
+
+ queue_num = fsl_qdma->n_queues;
+ blocks = fsl_qdma->num_blocks;
+
+ len = sizeof(*queue_head) * queue_num * blocks;
+ queue_head = rte_zmalloc("qdma: queue head", len, 0);
+ if (!queue_head)
+ return NULL;
+
+ for (i = 0; i < FSL_QDMA_QUEUE_MAX; i++)
+ queue_size[i] = QDMA_QUEUE_SIZE;
+
+ for (j = 0; j < blocks; j++) {
+ for (i = 0; i < queue_num; i++) {
+ if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+ queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+ DPAA_QDMA_ERR("Get wrong queue-sizes.\n");
+ return NULL;
+ }
+ queue_temp = queue_head + i + (j * queue_num);
+
+ queue_temp->cq =
+ dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+ queue_size[i],
+ sizeof(struct fsl_qdma_format) *
+ queue_size[i], &queue_temp->bus_addr);
+
+			if (!queue_temp->cq)
+				return NULL;
+
+			memset(queue_temp->cq, 0x0, queue_size[i] *
+			       sizeof(struct fsl_qdma_format));
+
+ queue_temp->block_base = fsl_qdma->block_base +
+ FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+ queue_temp->n_cq = queue_size[i];
+ queue_temp->id = i;
+ queue_temp->count = 0;
+ queue_temp->virt_head = queue_temp->cq;
+
+ }
+ }
+ return queue_head;
+}
+
+static struct fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
+{
+ struct fsl_qdma_queue *status_head;
+ unsigned int status_size;
+
+ status_size = QDMA_STATUS_SIZE;
+ if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+ status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+ DPAA_QDMA_ERR("Get wrong status_size.\n");
+ return NULL;
+ }
+
+ status_head = rte_zmalloc("qdma: status head", sizeof(*status_head), 0);
+ if (!status_head)
+ return NULL;
+
+ /*
+ * Buffer for queue command
+ */
+ status_head->cq = dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+ status_size,
+ sizeof(struct fsl_qdma_format) *
+ status_size,
+ &status_head->bus_addr);
+
+	if (!status_head->cq)
+		return NULL;
+
+	memset(status_head->cq, 0x0, status_size *
+	       sizeof(struct fsl_qdma_format));
+
+ status_head->n_cq = status_size;
+ status_head->virt_head = status_head->cq;
+
+ return status_head;
+}
+
+static int fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
+{
+ void *ctrl = fsl_qdma->ctrl_base;
+ void *block;
+ int i, count = RETRIES;
+ unsigned int j;
+ u32 reg;
+
+ /* Disable the command queue and wait for idle state. */
+ reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR);
+ reg |= FSL_QDMA_DMR_DQD;
+ qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR);
+ for (j = 0; j < fsl_qdma->num_blocks; j++) {
+ block = fsl_qdma->block_base +
+ FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+ for (i = 0; i < FSL_QDMA_QUEUE_NUM_MAX; i++)
+ qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BCQMR(i));
+ }
+ while (true) {
+ reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DSR);
+ if (!(reg & FSL_QDMA_DSR_DB))
+ break;
+ if (count-- < 0)
+ return -EBUSY;
+ rte_delay_us(100);
+ }
+
+ for (j = 0; j < fsl_qdma->num_blocks; j++) {
+ block = fsl_qdma->block_base +
+ FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+
+ /* Disable status queue. */
+ qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BSQMR);
+
+ /*
+ * clear the command queue interrupt detect register for
+ * all queues.
+ */
+ qdma_writel(fsl_qdma, 0xffffffff, block + FSL_QDMA_BCQIDR(0));
+ }
+
+ return 0;
+}
+
+static int
+fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma,
+ void *block, int id,
+ struct rte_qdma_enqdeq *e_context)
+{
+ struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+ struct fsl_qdma_queue *fsl_status = fsl_qdma->status[id];
+ struct fsl_qdma_queue *temp_queue;
+ struct fsl_qdma_format *status_addr;
+ struct fsl_qdma_comp *fsl_comp = NULL;
+ u32 reg, i;
+ int count = 0;
+ bool duplicate, duplicate_handle;
+
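+	/*
+	 * Walk the status queue. The engine may report the same completion
+	 * entry more than once; an entry matching the previously seen
+	 * (queue, address) pair is treated as a duplicate, and only the
+	 * status ring is advanced for it.
+	 */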
+ while (true) {
+ reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQSR);
+ if (reg & FSL_QDMA_BSQSR_QE)
+ return count;
+
+ duplicate = 0;
+ duplicate_handle = 0;
+ status_addr = fsl_status->virt_head;
+
+ if (qdma_ccdf_get_queue(status_addr) ==
+ pre_queue[id] &&
+ qdma_ccdf_addr_get64(status_addr) ==
+ pre_addr[id]) {
+ duplicate = 1;
+ }
+ i = qdma_ccdf_get_queue(status_addr) +
+ id * fsl_qdma->n_queues;
+ pre_addr[id] = qdma_ccdf_addr_get64(status_addr);
+ pre_queue[id] = qdma_ccdf_get_queue(status_addr);
+ temp_queue = fsl_queue + i;
+
+ if (list_empty(&temp_queue->comp_used)) {
+ if (duplicate)
+ duplicate_handle = 1;
+ else
+ continue;
+ } else {
+ fsl_comp = list_first_entry(&temp_queue->comp_used,
+ struct fsl_qdma_comp,
+ list);
+ if (fsl_comp->bus_addr + 16 !=
+ pre_addr[id]) {
+ if (duplicate)
+ duplicate_handle = 1;
+ else
+ return -1;
+ }
+ }
+
+ if (duplicate_handle) {
+ reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR);
+ reg |= FSL_QDMA_BSQMR_DI;
+ qdma_desc_addr_set64(status_addr, 0x0);
+ fsl_status->virt_head++;
+ if (fsl_status->virt_head == fsl_status->cq
+ + fsl_status->n_cq)
+ fsl_status->virt_head = fsl_status->cq;
+ qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
+ continue;
+ }
+ list_del(&fsl_comp->list);
+
+ reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR);
+ reg |= FSL_QDMA_BSQMR_DI;
+ qdma_desc_addr_set64(status_addr, 0x0);
+ fsl_status->virt_head++;
+ if (fsl_status->virt_head == fsl_status->cq + fsl_status->n_cq)
+ fsl_status->virt_head = fsl_status->cq;
+ qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
+ list_add_tail(&fsl_comp->list, &temp_queue->comp_free);
+ e_context->job[count] = (struct rte_qdma_job *)fsl_comp->params;
+ count++;
+
+ fsl_comp->qchan->status = DMA_COMPLETE;
+ }
+ return count;
+}
+
+static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
+{
+ struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+ struct fsl_qdma_queue *temp;
+ void *ctrl = fsl_qdma->ctrl_base;
+ void *status = fsl_qdma->status_base;
+ void *block;
+ u32 i, j;
+ u32 reg;
+ int ret, val;
+
+ /* Try to halt the qDMA engine first. */
+ ret = fsl_qdma_halt(fsl_qdma);
+ if (ret) {
+ DPAA_QDMA_ERR("DMA halt failed!");
+ return ret;
+ }
+
+ for (j = 0; j < fsl_qdma->num_blocks; j++) {
+ block = fsl_qdma->block_base +
+ FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+ for (i = 0; i < fsl_qdma->n_queues; i++) {
+ temp = fsl_queue + i + (j * fsl_qdma->n_queues);
+ /*
+ * Initialize Command Queue registers to
+ * point to the first
+ * command descriptor in memory.
+ * Dequeue Pointer Address Registers
+ * Enqueue Pointer Address Registers
+ */
+
+ qdma_writel(fsl_qdma, lower_32_bits(temp->bus_addr),
+ block + FSL_QDMA_BCQDPA_SADDR(i));
+ qdma_writel(fsl_qdma, upper_32_bits(temp->bus_addr),
+ block + FSL_QDMA_BCQEDPA_SADDR(i));
+ qdma_writel(fsl_qdma, lower_32_bits(temp->bus_addr),
+ block + FSL_QDMA_BCQEPA_SADDR(i));
+ qdma_writel(fsl_qdma, upper_32_bits(temp->bus_addr),
+ block + FSL_QDMA_BCQEEPA_SADDR(i));
+
+ /* Initialize the queue mode. */
+ reg = FSL_QDMA_BCQMR_EN;
+ reg |= FSL_QDMA_BCQMR_CD_THLD(ilog2(temp->n_cq) - 4);
+ reg |= FSL_QDMA_BCQMR_CQ_SIZE(ilog2(temp->n_cq) - 6);
+ qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BCQMR(i));
+ }
+
+ /*
+		 * Workaround for erratum ERR010812:
+		 * XOFF must be enabled to avoid enqueue rejections.
+		 * Set the SQCCMR ENTER_WM field to 0x20.
+ */
+
+ qdma_writel(fsl_qdma, FSL_QDMA_SQCCMR_ENTER_WM,
+ block + FSL_QDMA_SQCCMR);
+
+ /*
+ * Initialize status queue registers to point to the first
+ * command descriptor in memory.
+ * Dequeue Pointer Address Registers
+ * Enqueue Pointer Address Registers
+ */
+
+ qdma_writel(fsl_qdma,
+ upper_32_bits(fsl_qdma->status[j]->bus_addr),
+ block + FSL_QDMA_SQEEPAR);
+ qdma_writel(fsl_qdma,
+ lower_32_bits(fsl_qdma->status[j]->bus_addr),
+ block + FSL_QDMA_SQEPAR);
+ qdma_writel(fsl_qdma,
+ upper_32_bits(fsl_qdma->status[j]->bus_addr),
+ block + FSL_QDMA_SQEDPAR);
+ qdma_writel(fsl_qdma,
+ lower_32_bits(fsl_qdma->status[j]->bus_addr),
+ block + FSL_QDMA_SQDPAR);
+		/* Disable status queue interrupts. */
+
+ qdma_writel(fsl_qdma, 0x0, block + FSL_QDMA_BCQIER(0));
+ qdma_writel(fsl_qdma, 0x0, block + FSL_QDMA_BSQICR);
+ qdma_writel(fsl_qdma, 0x0, block + FSL_QDMA_CQIER);
+
+ /* Initialize the status queue mode. */
+ reg = FSL_QDMA_BSQMR_EN;
+ val = ilog2(fsl_qdma->status[j]->n_cq) - 6;
+ reg |= FSL_QDMA_BSQMR_CQ_SIZE(val);
+ qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
+ }
+
+ /* Initialize controller interrupt register. */
+ qdma_writel(fsl_qdma, 0xffffffff, status + FSL_QDMA_DEDR);
+ qdma_writel(fsl_qdma, 0xffffffff, status + FSL_QDMA_DEIER);
+
+ reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR);
+ reg &= ~FSL_QDMA_DMR_DQD;
+ qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR);
+
+ return 0;
+}
+
+static void *
+fsl_qdma_prep_memcpy(void *fsl_chan, dma_addr_t dst,
+ dma_addr_t src, size_t len,
+ void *call_back,
+ void *param)
+{
+ struct fsl_qdma_comp *fsl_comp;
+
+ fsl_comp =
+ fsl_qdma_request_enqueue_desc((struct fsl_qdma_chan *)fsl_chan);
+ if (!fsl_comp)
+ return NULL;
+
+ fsl_comp->qchan = fsl_chan;
+ fsl_comp->call_back_func = call_back;
+ fsl_comp->params = param;
+
+ fsl_qdma_comp_fill_memcpy(fsl_comp, dst, src, len);
+
+ ((struct fsl_qdma_chan *)fsl_chan)->status = DMA_IN_PREPAR;
+ return (void *)fsl_comp;
+}
+
+static int fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan,
+ struct fsl_qdma_comp *fsl_comp)
+{
+ struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+ void *block = fsl_queue->block_base;
+ u32 reg;
+
+ if (!fsl_comp)
+ return -1;
+
+ reg = qdma_readl(fsl_chan->qdma, block +
+ FSL_QDMA_BCQSR(fsl_queue->id));
+ if (reg & (FSL_QDMA_BCQSR_QF | FSL_QDMA_BCQSR_XOFF))
+ return -1;
+
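+	/* Copy the 16-byte head command descriptor into the next ring slot,
+	 * then set the EI bit in BCQMR so the engine processes the entry.
+	 */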
+ memcpy(fsl_queue->virt_head++, fsl_comp->virt_addr, 16);
+ if (fsl_queue->virt_head == fsl_queue->cq + fsl_queue->n_cq)
+ fsl_queue->virt_head = fsl_queue->cq;
+
+ list_add_tail(&fsl_comp->list, &fsl_queue->comp_used);
+
+ reg = qdma_readl(fsl_chan->qdma, block + FSL_QDMA_BSQSR);
+ if (reg & FSL_QDMA_BSQSR_QF)
+ return -1;
+
+ reg = qdma_readl(fsl_chan->qdma, block + FSL_QDMA_BCQMR(fsl_queue->id));
+ reg |= FSL_QDMA_BCQMR_EI;
+ qdma_writel(fsl_chan->qdma, reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+
+ fsl_chan->status = DMA_IN_PROGRESS;
+ return 0;
+}
+
+static int fsl_qdma_issue_pending(void *fsl_chan_org, void *fsl_comp)
+{
+ struct fsl_qdma_chan *fsl_chan = fsl_chan_org;
+
+ return fsl_qdma_enqueue_desc(fsl_chan,
+ (struct fsl_qdma_comp *)fsl_comp);
+}
+
+static int fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
+{
+ struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+ struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+ int ret;
+
+ if (fsl_queue->count++)
+ goto finally;
+
+ INIT_LIST_HEAD(&fsl_queue->comp_free);
+ INIT_LIST_HEAD(&fsl_queue->comp_used);
+
+ ret = fsl_qdma_pre_request_enqueue_comp_desc(fsl_queue,
+ FSL_QDMA_COMMAND_BUFFER_SIZE, 64);
+ if (ret) {
+ DPAA_QDMA_ERR(
+ "failed to alloc dma buffer for comp descriptor\n");
+ goto exit;
+ }
+
+ ret = fsl_qdma_pre_request_enqueue_sd_desc(fsl_queue,
+ FSL_QDMA_DESCRIPTOR_BUFFER_SIZE, 32);
+ if (ret) {
+ DPAA_QDMA_ERR(
+ "failed to alloc dma buffer for sd descriptor\n");
+ goto exit;
+ }
+
+finally:
+ return fsl_qdma->desc_allocated++;
+
+exit:
+ return -ENOMEM;
+}
+
+static int
+dpaa_get_channel(struct fsl_qdma_engine *fsl_qdma, uint32_t core)
+{
+ u32 i, start, end;
+
+ start = core * QDMA_QUEUES;
+
+	/* TODO: Currently only one queue per core is supported. If more
+	 * queues are required in the future, an extra layer of virtual
+	 * queues may be needed on the dequeue side, since there is only
+	 * one status queue per block/core. The statement below would then
+	 * become:
+	 * end = start + QDMA_QUEUES;
+	 */
+ end = start + 1;
+ for (i = start; i < end; i++) {
+ struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+ if (fsl_chan->free) {
+ fsl_chan->free = false;
+ fsl_qdma_alloc_chan_resources(fsl_chan);
+ return i;
+ }
+ }
+
+ return -1;
+}
+
+static void
+dma_release(void *fsl_chan)
+{
+ ((struct fsl_qdma_chan *)fsl_chan)->free = true;
+ fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
+}
+
+
+static int
+dpaa_qdma_configure(__rte_unused const struct rte_rawdev *rawdev,
+ __rte_unused rte_rawdev_obj_t config)
+{
+ return 0;
+}
+
+static int
+dpaa_qdma_start(__rte_unused struct rte_rawdev *rawdev)
+{
+ return 0;
+}
+
+static int
+dpaa_qdma_reset(__rte_unused struct rte_rawdev *rawdev)
+{
+ return 0;
+}
+
+static int
+dpaa_qdma_close(__rte_unused struct rte_rawdev *rawdev)
+{
+ return 0;
+}
+
+
+static int
+dpaa_qdma_queue_setup(struct rte_rawdev *rawdev,
+ __rte_unused uint16_t queue_id,
+ rte_rawdev_obj_t queue_conf)
+{
+ struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
+ struct rte_qdma_queue_config *q_config =
+ (struct rte_qdma_queue_config *)queue_conf;
+
+ return dpaa_get_channel(fsl_qdma, q_config->lcore_id);
+}
+
+static int
+dpaa_qdma_queue_release(struct rte_rawdev *rawdev,
+ uint16_t vq_id)
+{
+ struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
+ struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[vq_id];
+
+ dma_release(fsl_chan);
+ return 0;
+}
+
+static int
+dpaa_qdma_enqueue(struct rte_rawdev *rawdev,
+ __rte_unused struct rte_rawdev_buf **buffers,
+ unsigned int nb_jobs,
+ rte_rawdev_obj_t context)
+{
+ struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
+ struct rte_qdma_enqdeq *e_context = (struct rte_qdma_enqdeq *)context;
+ struct rte_qdma_job **job = e_context->job;
+ struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[e_context->vq_id];
+	unsigned int i = 0;
+	int ret;
+
+ for (i = 0; i < nb_jobs; i++) {
+ void *fsl_comp = NULL;
+
+ fsl_comp = fsl_qdma_prep_memcpy(fsl_chan,
+ (dma_addr_t)job[i]->dest,
+ (dma_addr_t)job[i]->src,
+ job[i]->len, NULL, job[i]);
+ if (!fsl_comp) {
+ DPAA_QDMA_DP_DEBUG("fsl_comp is NULL\n");
+ return i;
+ }
+ ret = fsl_qdma_issue_pending(fsl_chan, fsl_comp);
+ if (ret)
+ return i;
+ }
+
+ return i;
+}
+
+static int
+dpaa_qdma_dequeue(struct rte_rawdev *rawdev,
+ __rte_unused struct rte_rawdev_buf **buffers,
+ __rte_unused unsigned int nb_jobs,
+ rte_rawdev_obj_t cntxt)
+{
+ struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
+ struct rte_qdma_enqdeq *e_context = (struct rte_qdma_enqdeq *)cntxt;
+ int id = (int)((e_context->vq_id) / QDMA_QUEUES);
+ void *block;
+ void *ctrl = fsl_qdma->ctrl_base;
+ unsigned int reg;
+ int intr;
+ void *status = fsl_qdma->status_base;
+
+ intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DEDR);
+ if (intr) {
+ DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+ intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW0R);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+ intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW1R);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+ intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW2R);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+ intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW3R);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+ intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFQIDR);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+ intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECBR);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+ qdma_writel(fsl_qdma, 0xffffffff,
+ status + FSL_QDMA_DEDR);
+ intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DEDR);
+ }
+
+ block = fsl_qdma->block_base +
+ FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id);
+
+ reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQSR);
+ if (reg & FSL_QDMA_BSQSR_QE)
+ return 0;
+
+ intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, e_context);
+ if (intr < 0) {
+ reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR);
+ reg |= FSL_QDMA_DMR_DQD;
+ qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR);
+ qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BCQIER(0));
+ DPAA_QDMA_ERR("QDMA: status err!\n");
+ }
+
+ return intr;
+}
+
+static int
+dpaa_qdma_attr_get(__rte_unused struct rte_rawdev *rawdev,
+ __rte_unused const char *attr_name,
+ __rte_unused uint64_t *attr_value)
+{
+ return 0;
+}
+
+static struct rte_rawdev_ops dpaa_qdma_ops = {
+ .dev_configure = dpaa_qdma_configure,
+ .dev_start = dpaa_qdma_start,
+ .dev_reset = dpaa_qdma_reset,
+ .dev_close = dpaa_qdma_close,
+ .queue_setup = dpaa_qdma_queue_setup,
+ .queue_release = dpaa_qdma_queue_release,
+ .attr_get = dpaa_qdma_attr_get,
+ .enqueue_bufs = dpaa_qdma_enqueue,
+ .dequeue_bufs = dpaa_qdma_dequeue,
+};
+
+static int
+dpaa_qdma_init(struct rte_rawdev *rawdev)
+{
+ struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
+ struct fsl_qdma_chan *fsl_chan;
+ uint64_t phys_addr;
+ unsigned int len;
+ int ccsr_qdma_fd;
+ int regs_size;
+ int ret;
+ u32 i;
+
+ fsl_qdma->desc_allocated = 0;
+ fsl_qdma->n_chans = VIRT_CHANNELS;
+ fsl_qdma->n_queues = QDMA_QUEUES;
+ fsl_qdma->num_blocks = QDMA_BLOCKS;
+ fsl_qdma->block_offset = QDMA_BLOCK_OFFSET;
+
+ len = sizeof(*fsl_chan) * fsl_qdma->n_chans;
+ fsl_qdma->chans = rte_zmalloc("qdma: fsl chans", len, 0);
+ if (!fsl_qdma->chans)
+ return -1;
+
+ len = sizeof(struct fsl_qdma_queue *) * fsl_qdma->num_blocks;
+ fsl_qdma->status = rte_zmalloc("qdma: fsl status", len, 0);
+ if (!fsl_qdma->status) {
+ rte_free(fsl_qdma->chans);
+ return -1;
+ }
+
+ for (i = 0; i < fsl_qdma->num_blocks; i++) {
+ rte_atomic32_init(&wait_task[i]);
+ fsl_qdma->status[i] = fsl_qdma_prep_status_queue();
+ if (!fsl_qdma->status[i])
+ goto err;
+ }
+
+ ccsr_qdma_fd = open("/dev/mem", O_RDWR);
+ if (unlikely(ccsr_qdma_fd < 0)) {
+ DPAA_QDMA_ERR("Can not open /dev/mem for qdma CCSR map");
+ goto err;
+ }
+
+ regs_size = fsl_qdma->block_offset * (fsl_qdma->num_blocks + 2);
+ phys_addr = QDMA_CCSR_BASE;
+ fsl_qdma->ctrl_base = mmap(NULL, regs_size, PROT_READ |
+ PROT_WRITE, MAP_SHARED,
+ ccsr_qdma_fd, phys_addr);
+
+ close(ccsr_qdma_fd);
+ if (fsl_qdma->ctrl_base == MAP_FAILED) {
+ DPAA_QDMA_ERR("Can not map CCSR base qdma: Phys: 0x%lx "
+ "size %d\n", phys_addr, regs_size);
+ goto err;
+ }
+
+ fsl_qdma->status_base = fsl_qdma->ctrl_base + QDMA_BLOCK_OFFSET;
+ fsl_qdma->block_base = fsl_qdma->status_base + QDMA_BLOCK_OFFSET;
+
+ fsl_qdma->queue = fsl_qdma_alloc_queue_resources(fsl_qdma);
+ if (!fsl_qdma->queue) {
+ munmap(fsl_qdma->ctrl_base, regs_size);
+ goto err;
+ }
+
+ fsl_qdma->big_endian = QDMA_BIG_ENDIAN;
+ for (i = 0; i < fsl_qdma->n_chans; i++) {
+ struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+ fsl_chan->qdma = fsl_qdma;
+ fsl_chan->queue = fsl_qdma->queue + i % (fsl_qdma->n_queues *
+ fsl_qdma->num_blocks);
+ fsl_chan->free = true;
+ }
+
+ ret = fsl_qdma_reg_init(fsl_qdma);
+ if (ret) {
+ DPAA_QDMA_ERR("Can't Initialize the qDMA engine.\n");
+ munmap(fsl_qdma->ctrl_base, regs_size);
+ goto err;
+ }
+
+ return 0;
+
+err:
+ rte_free(fsl_qdma->chans);
+ rte_free(fsl_qdma->status);
+
+ return -1;
+}
+
+static int
+dpaa_qdma_probe(struct rte_dpaa_driver *dpaa_drv,
+ struct rte_dpaa_device *dpaa_dev)
+{
+ struct rte_rawdev *rawdev;
+ int ret;
+
+ rawdev = rte_rawdev_pmd_allocate(dpaa_dev->device.name,
+ sizeof(struct fsl_qdma_engine),
+ rte_socket_id());
+ if (!rawdev) {
+ DPAA_QDMA_ERR("Unable to allocate rawdevice");
+ return -EINVAL;
+ }
+
+ dpaa_dev->rawdev = rawdev;
+ rawdev->dev_ops = &dpaa_qdma_ops;
+ rawdev->device = &dpaa_dev->device;
+ rawdev->driver_name = dpaa_drv->driver.name;
+
+ /* Invoke PMD device initialization function */
+ ret = dpaa_qdma_init(rawdev);
+ if (ret) {
+ rte_rawdev_pmd_release(rawdev);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int
+dpaa_qdma_remove(struct rte_dpaa_device *dpaa_dev)
+{
+ struct rte_rawdev *rawdev = dpaa_dev->rawdev;
+ struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
+ int ret;
+
+ rte_free(fsl_qdma->status);
+ rte_free(fsl_qdma->chans);
+ ret = rte_rawdev_pmd_release(rawdev);
+ if (ret)
+ DPAA_QDMA_ERR("Device cleanup failed\n");
+
+ return 0;
+}
+
+static struct rte_dpaa_driver rte_dpaa_qdma_pmd = {
+ .drv_type = FSL_DPAA_QDMA,
+ .probe = dpaa_qdma_probe,
+ .remove = dpaa_qdma_remove,
+};
+
+RTE_PMD_REGISTER_DPAA(dpaa_qdma, rte_dpaa_qdma_pmd);
+
+RTE_INIT(dpaa_qdma_init_log)
+{
+ dpaa_qdma_logtype = rte_log_register("pmd.raw.dpaa.qdma");
+ if (dpaa_qdma_logtype >= 0)
+ rte_log_set_level(dpaa_qdma_logtype, RTE_LOG_INFO);
+}
diff --git a/drivers/raw/dpaa_qdma/dpaa_qdma.h b/drivers/raw/dpaa_qdma/dpaa_qdma.h
new file mode 100644
index 0000000..a60fdcb
--- /dev/null
+++ b/drivers/raw/dpaa_qdma/dpaa_qdma.h
@@ -0,0 +1,275 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#ifndef _FSL_QDMA_H_
+#define _FSL_QDMA_H_
+
+#include <dpaa_list.h>
+#include <rte_atomic.h>
+
+#define u64 uint64_t
+#define u32 uint32_t
+#define u16 uint16_t
+#define u8 uint8_t
+
+#ifdef DEBUG
+#define debug printf
+#else
+#define debug
+#endif
+#ifndef BIT
+#define BIT(nr) (1UL << (nr))
+#endif
+
+#define CORE_NUMBER 4
+#define RETRIES 5
+
+#ifndef GENMASK
+#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#define GENMASK(h, l) \
+ (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+#endif
+
+#define FSL_QDMA_DMR 0x0
+#define FSL_QDMA_DSR 0x4
+#define FSL_QDMA_DEIER 0xe00
+#define FSL_QDMA_DEDR 0xe04
+#define FSL_QDMA_DECFDW0R 0xe10
+#define FSL_QDMA_DECFDW1R 0xe14
+#define FSL_QDMA_DECFDW2R 0xe18
+#define FSL_QDMA_DECFDW3R 0xe1c
+#define FSL_QDMA_DECFQIDR 0xe30
+#define FSL_QDMA_DECBR 0xe34
+
+#define FSL_QDMA_BCQMR(x) (0xc0 + 0x100 * (x))
+#define FSL_QDMA_BCQSR(x) (0xc4 + 0x100 * (x))
+#define FSL_QDMA_BCQEDPA_SADDR(x) (0xc8 + 0x100 * (x))
+#define FSL_QDMA_BCQDPA_SADDR(x) (0xcc + 0x100 * (x))
+#define FSL_QDMA_BCQEEPA_SADDR(x) (0xd0 + 0x100 * (x))
+#define FSL_QDMA_BCQEPA_SADDR(x) (0xd4 + 0x100 * (x))
+#define FSL_QDMA_BCQIER(x) (0xe0 + 0x100 * (x))
+#define FSL_QDMA_BCQIDR(x) (0xe4 + 0x100 * (x))
+
+#define FSL_QDMA_SQEDPAR 0x808
+#define FSL_QDMA_SQDPAR 0x80c
+#define FSL_QDMA_SQEEPAR 0x810
+#define FSL_QDMA_SQEPAR 0x814
+#define FSL_QDMA_BSQMR 0x800
+#define FSL_QDMA_BSQSR 0x804
+#define FSL_QDMA_BSQICR 0x828
+#define FSL_QDMA_CQMR 0xa00
+#define FSL_QDMA_CQDSCR1 0xa08
+#define FSL_QDMA_CQDSCR2 0xa0c
+#define FSL_QDMA_CQIER 0xa10
+#define FSL_QDMA_CQEDR 0xa14
+#define FSL_QDMA_SQCCMR 0xa20
+
+#define FSL_QDMA_SQICR_ICEN
+
+#define FSL_QDMA_CQIDR_CQT 0xff000000
+#define FSL_QDMA_CQIDR_SQPE 0x800000
+#define FSL_QDMA_CQIDR_SQT 0x8000
+
+#define FSL_QDMA_BCQIER_CQTIE 0x8000
+#define FSL_QDMA_BCQIER_CQPEIE 0x800000
+#define FSL_QDMA_BSQICR_ICEN 0x80000000
+#define FSL_QDMA_BSQICR_ICST(x) ((x) << 16)
+#define FSL_QDMA_CQIER_MEIE 0x80000000
+#define FSL_QDMA_CQIER_TEIE 0x1
+#define FSL_QDMA_SQCCMR_ENTER_WM 0x200000
+
+#define FSL_QDMA_QUEUE_MAX 8
+
+#define FSL_QDMA_BCQMR_EN 0x80000000
+#define FSL_QDMA_BCQMR_EI 0x40000000
+#define FSL_QDMA_BCQMR_CD_THLD(x) ((x) << 20)
+#define FSL_QDMA_BCQMR_CQ_SIZE(x) ((x) << 16)
+
+#define FSL_QDMA_BCQSR_QF 0x10000
+#define FSL_QDMA_BCQSR_XOFF 0x1
+
+#define FSL_QDMA_BSQMR_EN 0x80000000
+#define FSL_QDMA_BSQMR_DI 0x40000000
+#define FSL_QDMA_BSQMR_CQ_SIZE(x) ((x) << 16)
+
+#define FSL_QDMA_BSQSR_QE 0x20000
+#define FSL_QDMA_BSQSR_QF 0x10000
+
+#define FSL_QDMA_DMR_DQD 0x40000000
+#define FSL_QDMA_DSR_DB 0x80000000
+
+#define FSL_QDMA_COMMAND_BUFFER_SIZE 64
+#define FSL_QDMA_DESCRIPTOR_BUFFER_SIZE 32
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MIN 64
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MAX 16384
+#define FSL_QDMA_QUEUE_NUM_MAX 8
+
+#define FSL_QDMA_CMD_RWTTYPE 0x4
+#define FSL_QDMA_CMD_LWC 0x2
+
+#define FSL_QDMA_CMD_RWTTYPE_OFFSET 28
+#define FSL_QDMA_CMD_NS_OFFSET 27
+#define FSL_QDMA_CMD_DQOS_OFFSET 24
+#define FSL_QDMA_CMD_WTHROTL_OFFSET 20
+#define FSL_QDMA_CMD_DSEN_OFFSET 19
+#define FSL_QDMA_CMD_LWC_OFFSET 16
+
+#define QDMA_CCDF_STATUS 20
+#define QDMA_CCDF_OFFSET 20
+#define QDMA_CCDF_MASK GENMASK(28, 20)
+#define QDMA_CCDF_FORMAT	BIT(29)
+#define QDMA_CCDF_SER BIT(30)
+
+#define QDMA_SG_FIN BIT(30)
+#define QDMA_SG_EXT BIT(31)
+#define QDMA_SG_LEN_MASK GENMASK(29, 0)
+
+#define QDMA_BIG_ENDIAN 0x00000001
+#define COMP_TIMEOUT 100000
+#define COMMAND_QUEUE_OVERFLOW	10
+
+/* qdma engine attribute */
+#define QDMA_QUEUE_SIZE 64
+#define QDMA_STATUS_SIZE 64
+#define QDMA_CCSR_BASE 0x8380000
+#define VIRT_CHANNELS 32
+#define QDMA_BLOCK_OFFSET 0x10000
+#define QDMA_BLOCKS 4
+#define QDMA_QUEUES 8
+#define QDMA_DELAY 1000
+
+#define __arch_getq(a) (*(volatile u64 *)(a))
+#define __arch_putq(v, a) (*(volatile u64 *)(a) = (v))
+#define __arch_getq32(a) (*(volatile u32 *)(a))
+#define __arch_putq32(v, a) (*(volatile u32 *)(a) = (v))
+#define readq(c) \
+ ({ u64 __v = __arch_getq(c); rte_io_rmb(); __v; })
+#define writeq(v, c) \
+ ({ u64 __v = v; rte_io_wmb(); __arch_putq(__v, c); __v; })
+#define readq32(c) \
+ ({ u32 __v = __arch_getq32(c); rte_io_rmb(); __v; })
+#define writeq32(v, c) \
+ ({ u32 __v = v; rte_io_wmb(); __arch_putq32(__v, c); __v; })
+#define ioread64(_p) readq(_p)
+#define iowrite64(_v, _p) writeq(_v, _p)
+#define ioread32(_p) readq32(_p)
+#define iowrite32(_v, _p) writeq32(_v, _p)
+
+#define ioread32be(_p) be32_to_cpu(readq32(_p))
+#define iowrite32be(_v, _p)	writeq32(cpu_to_be32(_v), _p)
+
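+/* Endianness-aware register accessors, selected by the engine's
+ * big_endian flag.
+ */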
+#define QDMA_IN(fsl_qdma_engine, addr) \
+ (((fsl_qdma_engine)->big_endian & QDMA_BIG_ENDIAN) ? \
+ ioread32be(addr) : ioread32(addr))
+#define QDMA_OUT(fsl_qdma_engine, addr, val) \
+ (((fsl_qdma_engine)->big_endian & QDMA_BIG_ENDIAN) ? \
+ iowrite32be(val, addr) : iowrite32(val, addr))
+
+#define FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma_engine, x) \
+ (((fsl_qdma_engine)->block_offset) * (x))
+
+typedef void (*dma_call_back)(void *params);
+
+/* qDMA Command Descriptor Formats */
+struct fsl_qdma_format {
+ __le32 status; /* ser, status */
+ __le32 cfg; /* format, offset */
+ union {
+ struct {
+ __le32 addr_lo; /* low 32-bits of 40-bit address */
+ u8 addr_hi; /* high 8-bits of 40-bit address */
+ u8 __reserved1[2];
+ u8 cfg8b_w1; /* dd, queue */
+ };
+ __le64 data;
+ };
+};
+
+/* qDMA Source Descriptor Format */
+struct fsl_qdma_sdf {
+ __le32 rev3;
+ __le32 cfg; /* rev4, bit[0-11] - ssd, bit[12-23] sss */
+ __le32 rev5;
+ __le32 cmd;
+};
+
+/* qDMA Destination Descriptor Format */
+struct fsl_qdma_ddf {
+ __le32 rev1;
+ __le32 cfg; /* rev2, bit[0-11] - dsd, bit[12-23] - dss */
+ __le32 rev3;
+ __le32 cmd;
+};
+
+enum dma_status {
+ DMA_COMPLETE,
+ DMA_IN_PROGRESS,
+ DMA_IN_PREPAR,
+ DMA_PAUSED,
+ DMA_ERROR,
+};
+
+struct fsl_qdma_chan {
+ struct fsl_qdma_engine *qdma;
+ struct fsl_qdma_queue *queue;
+ enum dma_status status;
+ bool free;
+ struct list_head list;
+};
+
+struct fsl_qdma_list {
+ struct list_head dma_list;
+};
+
+struct fsl_qdma_queue {
+ struct fsl_qdma_format *virt_head;
+ struct list_head comp_used;
+ struct list_head comp_free;
+ dma_addr_t bus_addr;
+ u32 n_cq;
+ u32 id;
+ u32 count;
+ struct fsl_qdma_format *cq;
+ void *block_base;
+};
+
+struct fsl_qdma_comp {
+ dma_addr_t bus_addr;
+ dma_addr_t desc_bus_addr;
+ void *virt_addr;
+ void *desc_virt_addr;
+ struct fsl_qdma_chan *qchan;
+ dma_call_back call_back_func;
+ void *params;
+ struct list_head list;
+};
+
+struct fsl_qdma_engine {
+ int desc_allocated;
+ void *ctrl_base;
+ void *status_base;
+ void *block_base;
+ u32 n_chans;
+ u32 n_queues;
+ int error_irq;
+ bool big_endian;
+ struct fsl_qdma_queue *queue;
+ struct fsl_qdma_queue **status;
+ struct fsl_qdma_chan *chans;
+ u32 num_blocks;
+ int block_offset;
+};
+
+static u64 pre_addr[CORE_NUMBER];
+static u64 pre_queue[CORE_NUMBER];
+static rte_atomic32_t wait_task[CORE_NUMBER];
+
+#ifndef QDMA_MEMZONE
+/* Array of memzone pointers */
+static const struct rte_memzone *qdma_mz_mapping[RTE_MAX_MEMZONE];
+/* Counter to track current memzone allocated */
+static uint16_t qdma_mz_count;
+#endif
+
+#endif /* _FSL_QDMA_H_ */
diff --git a/drivers/raw/dpaa_qdma/dpaa_qdma_logs.h b/drivers/raw/dpaa_qdma/dpaa_qdma_logs.h
new file mode 100644
index 0000000..7c11815
--- /dev/null
+++ b/drivers/raw/dpaa_qdma/dpaa_qdma_logs.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#ifndef __DPAA_QDMA_LOGS_H__
+#define __DPAA_QDMA_LOGS_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+extern int dpaa_qdma_logtype;
+
+#define DPAA_QDMA_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, dpaa_qdma_logtype, "dpaa_qdma: " \
+ fmt "\n", ## args)
+
+#define DPAA_QDMA_DEBUG(fmt, args...) \
+ rte_log(RTE_LOG_DEBUG, dpaa_qdma_logtype, "dpaa_qdma: %s(): " \
+ fmt "\n", __func__, ## args)
+
+#define DPAA_QDMA_FUNC_TRACE() DPAA_QDMA_DEBUG(">>")
+
+#define DPAA_QDMA_INFO(fmt, args...) \
+ DPAA_QDMA_LOG(INFO, fmt, ## args)
+#define DPAA_QDMA_ERR(fmt, args...) \
+ DPAA_QDMA_LOG(ERR, fmt, ## args)
+#define DPAA_QDMA_WARN(fmt, args...) \
+ DPAA_QDMA_LOG(WARNING, fmt, ## args)
+
+/* DP logs, toggled out at compile time if level is lower than current level */
+#define DPAA_QDMA_DP_LOG(level, fmt, args...) \
+ RTE_LOG_DP(level, PMD, "dpaa_qdma: " fmt "\n", ## args)
+
+#define DPAA_QDMA_DP_DEBUG(fmt, args...) \
+ DPAA_QDMA_DP_LOG(DEBUG, fmt, ## args)
+#define DPAA_QDMA_DP_INFO(fmt, args...) \
+ DPAA_QDMA_DP_LOG(INFO, fmt, ## args)
+#define DPAA_QDMA_DP_WARN(fmt, args...) \
+ DPAA_QDMA_DP_LOG(WARNING, fmt, ## args)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __DPAA_QDMA_LOGS_H__ */
diff --git a/drivers/raw/dpaa_qdma/meson.build b/drivers/raw/dpaa_qdma/meson.build
new file mode 100644
index 0000000..ce2ac33
--- /dev/null
+++ b/drivers/raw/dpaa_qdma/meson.build
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2020 NXP
+
+if not is_linux
+ build = false
+ reason = 'only supported on linux'
+endif
+
+deps += ['rawdev', 'bus_dpaa']
+sources = files('dpaa_qdma.c')
+includes += include_directories('../dpaa2_qdma')
+
+if cc.has_argument('-Wno-pointer-arith')
+ cflags += '-Wno-pointer-arith'
+endif
diff --git a/drivers/raw/dpaa_qdma/rte_rawdev_dpaa_qdma_version.map b/drivers/raw/dpaa_qdma/rte_rawdev_dpaa_qdma_version.map
new file mode 100644
index 0000000..f9f17e4
--- /dev/null
+++ b/drivers/raw/dpaa_qdma/rte_rawdev_dpaa_qdma_version.map
@@ -0,0 +1,3 @@
+DPDK_20.0 {
+ local: *;
+};
diff --git a/drivers/raw/meson.build b/drivers/raw/meson.build
index 2c1e65e..0e310ac 100644
--- a/drivers/raw/meson.build
+++ b/drivers/raw/meson.build
@@ -5,7 +5,7 @@ if is_windows
subdir_done()
endif
-drivers = ['dpaa2_cmdif', 'dpaa2_qdma',
+drivers = ['dpaa_qdma', 'dpaa2_cmdif', 'dpaa2_qdma',
'ifpga', 'ioat', 'ntb',
'octeontx2_dma',
'octeontx2_ep',
--
2.7.4
* Re: [dpdk-dev] [PATCH RFC] raw: add dpaa qdma driver
2020-09-07 9:50 [dpdk-dev] [PATCH RFC] raw: add dpaa qdma driver Gagandeep Singh
@ 2020-09-25 6:10 ` Hemant Agrawal
2021-03-24 21:26 ` Thomas Monjalon
0 siblings, 1 reply; 4+ messages in thread
From: Hemant Agrawal @ 2020-09-25 6:10 UTC (permalink / raw)
To: Gagandeep Singh, dev, nipun.gupta, hemant.agrawal; +Cc: Peng Ma
Hi Gagan,
On 9/7/2020 3:20 PM, Gagandeep Singh wrote:
> This patch adds support for dpaa qdma based driver.
>
Can you provide more details and break it into logical parts?
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> Signed-off-by: Peng Ma <peng.ma@nxp.com>
> ---
> doc/guides/rawdevs/dpaa_qdma.rst | 98 ++
> doc/guides/rawdevs/index.rst | 1 +
> drivers/bus/dpaa/dpaa_bus.c | 22 +
> drivers/bus/dpaa/rte_dpaa_bus.h | 5 +
> drivers/common/dpaax/dpaa_list.h | 6 +-
> drivers/raw/dpaa_qdma/dpaa_qdma.c | 1074 ++++++++++++++++++++
> drivers/raw/dpaa_qdma/dpaa_qdma.h | 275 +++++
> drivers/raw/dpaa_qdma/dpaa_qdma_logs.h | 46 +
> drivers/raw/dpaa_qdma/meson.build | 15 +
> .../raw/dpaa_qdma/rte_rawdev_dpaa_qdma_version.map | 3 +
> drivers/raw/meson.build | 2 +-
> 11 files changed, 1543 insertions(+), 4 deletions(-)
> create mode 100644 doc/guides/rawdevs/dpaa_qdma.rst
> create mode 100644 drivers/raw/dpaa_qdma/dpaa_qdma.c
> create mode 100644 drivers/raw/dpaa_qdma/dpaa_qdma.h
> create mode 100644 drivers/raw/dpaa_qdma/dpaa_qdma_logs.h
> create mode 100644 drivers/raw/dpaa_qdma/meson.build
> create mode 100644 drivers/raw/dpaa_qdma/rte_rawdev_dpaa_qdma_version.map
>
> diff --git a/doc/guides/rawdevs/dpaa_qdma.rst b/doc/guides/rawdevs/dpaa_qdma.rst
> new file mode 100644
> index 0000000..49457f6
> --- /dev/null
> +++ b/doc/guides/rawdevs/dpaa_qdma.rst
> @@ -0,0 +1,98 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright 2018 NXP
> +
> +NXP DPAA QDMA Driver
> +=====================
> +
> +The DPAA QDMA is an implementation of the rawdev API, that provide means
> +to initiate a DMA transaction from CPU. The initiated DMA is performed
> +without CPU being involved in the actual DMA transaction. This is achieved
> +via using the DPDMAI device exposed by MC.
> +
> +More information can be found at `NXP Official Website
> +<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
> +
> +Features
> +--------
> +
> +The DPAA QDMA implements following features in the rawdev API;
> +
> +- Supports issuing DMA of data within memory without hogging CPU while
> + performing DMA operation.
> +- Supports configuring to optionally get status of the DMA translation on
> + per DMA operation basis.
> +
> +Supported DPAA SoCs
> +--------------------
> +
> +- LS1043A
> +- LS1046A
> +
> +Prerequisites
> +-------------
> +
> +See :doc:`../platform/dpaa` for setup information
> +
> +Currently supported by DPDK:
> +
> +- NXP SDK **19.09+**.
> +- Supported architectures: **arm64 LE**.
> +
> +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
> +
> +.. note::
> +
> + Some part of fslmc bus code (mc flib - object library) routines are
> + dual licensed (BSD & GPLv2).
> +
> +Pre-Installation Configuration
> +------------------------------
> +
> +Config File Options
> +~~~~~~~~~~~~~~~~~~~
> +
> +The following options can be modified in the ``config`` file.
> +
> +- ``CONFIG_RTE_LIBRTE_PMD_DPAA_QDMA_RAWDEV`` (default ``y``)
> +
> + Toggle compilation of the ``lrte_pmd_dpaa_qdma`` driver.
> +
> +Enabling logs
> +-------------
> +
> +For enabling logs, use the following EAL parameter:
> +
> +.. code-block:: console
> +
> + ./your_qdma_application <EAL args> --log-level=pmd.raw.dpaa.qdma,<level>
> +
> +Using ``pmd.raw.dpaa.qdma`` as log matching criteria, all Event PMD logs can be
> +enabled which are lower than logging ``level``.
> +
> +Driver Compilation
> +~~~~~~~~~~~~~~~~~~
> +
> +To compile the DPAA QDMA PMD for Linux arm64 gcc target, run the
> +following ``make`` command:
> +
> +.. code-block:: console
> +
> + cd <DPDK-source-directory>
> + make config T=arm64-dpaa-linux-gcc install
> +
> +Initialization
> +--------------
> +
> +The DPAA QDMA is exposed as a vdev device which consists of dpdmai devices.
> +On EAL initialization, dpdmai devices will be probed and populated into the
> +rawdevices. The rawdev ID of the device can be obtained using
> +
> +* Invoking ``rte_rawdev_get_dev_id("dpdmai.x")`` from the application
> + where x is the object ID of the DPDMAI object created by MC. Use can
> + use this index for further rawdev function calls.
> +
> +Platform Requirement
> +~~~~~~~~~~~~~~~~~~~~
> +
> +DPAA drivers for DPDK can only work on NXP SoCs as listed in the
> +``Supported DPAA SoCs``.
> diff --git a/doc/guides/rawdevs/index.rst b/doc/guides/rawdevs/index.rst
> index f64ec44..8450006 100644
> --- a/doc/guides/rawdevs/index.rst
> +++ b/doc/guides/rawdevs/index.rst
> @@ -11,6 +11,7 @@ application through rawdev API.
> :maxdepth: 2
> :numbered:
>
> + dpaa_qdma
> dpaa2_cmdif
> dpaa2_qdma
> ifpga
> diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
> index 32e872d..8697e9e 100644
> --- a/drivers/bus/dpaa/dpaa_bus.c
> +++ b/drivers/bus/dpaa/dpaa_bus.c
> @@ -229,6 +229,28 @@ dpaa_create_device_list(void)
>
> rte_dpaa_bus.device_count += i;
>
> + /* Creating QDMA Device */
> + for (i = 0; i < RTE_DPAA_QDMA_DEVICES; i++) {
> + dev = calloc(1, sizeof(struct rte_dpaa_device));
> + if (!dev) {
> + DPAA_BUS_LOG(ERR, "Failed to allocate QDMA device");
> + ret = -1;
> + goto cleanup;
> + }
> +
> + dev->device_type = FSL_DPAA_QDMA;
> + dev->id.dev_id = rte_dpaa_bus.device_count + i;
> +
> + memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
> + sprintf(dev->name, "dpaa_qdma-%d", i+1);
> + DPAA_BUS_LOG(INFO, "%s qdma device added", dev->name);
> + dev->device.name = dev->name;
> + dev->device.devargs = dpaa_devargs_lookup(dev);
> +
> + dpaa_add_to_device_list(dev);
> + }
> + rte_dpaa_bus.device_count += i;
> +
> return 0;
>
> cleanup:
> diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
> index fdaa63a..959cfdb 100644
> --- a/drivers/bus/dpaa/rte_dpaa_bus.h
> +++ b/drivers/bus/dpaa/rte_dpaa_bus.h
> @@ -33,6 +33,9 @@
> /** Device driver supports link state interrupt */
> #define RTE_DPAA_DRV_INTR_LSC 0x0008
>
> +/** Number of supported QDMA devices */
> +#define RTE_DPAA_QDMA_DEVICES 1
> +
> #define RTE_DEV_TO_DPAA_CONST(ptr) \
> container_of(ptr, const struct rte_dpaa_device, device)
>
> @@ -48,6 +51,7 @@ TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
> enum rte_dpaa_type {
> FSL_DPAA_ETH = 1,
> FSL_DPAA_CRYPTO,
> + FSL_DPAA_QDMA,
> };
>
> struct rte_dpaa_bus {
> @@ -70,6 +74,7 @@ struct rte_dpaa_device {
> union {
> struct rte_eth_dev *eth_dev;
> struct rte_cryptodev *crypto_dev;
> + struct rte_rawdev *rawdev;
> };
> struct rte_dpaa_driver *driver;
> struct dpaa_device_id id;
> diff --git a/drivers/common/dpaax/dpaa_list.h b/drivers/common/dpaax/dpaa_list.h
> index e945759..58c563e 100644
> --- a/drivers/common/dpaax/dpaa_list.h
> +++ b/drivers/common/dpaax/dpaa_list.h
> @@ -1,7 +1,5 @@
> /* SPDX-License-Identifier: BSD-3-Clause
> - *
> - * Copyright 2017 NXP
> - *
> + * Copyright 2017,2020 NXP
> */
>
> #ifndef __DPAA_LIST_H
> @@ -35,6 +33,8 @@ do { \
> const struct list_head *__p298 = (p); \
> ((__p298->next == __p298) && (__p298->prev == __p298)); \
> })
> +#define list_first_entry(ptr, type, member) \
> + list_entry((ptr)->next, type, member)
> #define list_add(p, l) \
> do { \
> struct list_head *__p298 = (p); \
> diff --git a/drivers/raw/dpaa_qdma/dpaa_qdma.c b/drivers/raw/dpaa_qdma/dpaa_qdma.c
> new file mode 100644
> index 0000000..6897dc4
> --- /dev/null
> +++ b/drivers/raw/dpaa_qdma/dpaa_qdma.c
> @@ -0,0 +1,1074 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020 NXP
> + * Driver for NXP Layerscape Queue direct memory access controller (qDMA)
> + */
> +
> +#include <sys/time.h>
> +#include <semaphore.h>
> +
> +#include <rte_mbuf.h>
> +#include <rte_rawdev.h>
> +#include <rte_dpaa_bus.h>
> +#include <rte_rawdev_pmd.h>
> +#include <compat.h>
> +#include <rte_hexdump.h>
> +
> +#include <rte_pmd_dpaa2_qdma.h>
> +#include "dpaa_qdma.h"
> +#include "dpaa_qdma_logs.h"
> +
> +/* Dynamic log type identifier */
> +int dpaa_qdma_logtype;
> +
> +static inline u64
> +qdma_ccdf_addr_get64(const struct fsl_qdma_format *ccdf)
> +{
> + return rte_le_to_cpu_64(ccdf->data) & 0xffffffffffLLU;
> +}
> +
> +static inline void
> +qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
> +{
> + ccdf->addr_hi = upper_32_bits(addr);
> + ccdf->addr_lo = rte_cpu_to_le_32(lower_32_bits(addr));
> +}
> +
> +static inline u64
> +qdma_ccdf_get_queue(const struct fsl_qdma_format *ccdf)
> +{
> + return ccdf->cfg8b_w1 & 0xff;
> +}
> +
> +static inline int
> +qdma_ccdf_get_offset(const struct fsl_qdma_format *ccdf)
> +{
> + return (rte_le_to_cpu_32(ccdf->cfg) & QDMA_CCDF_MASK) >> QDMA_CCDF_OFFSET;
> +}
> +
> +static inline void
> +qdma_ccdf_set_format(struct fsl_qdma_format *ccdf, int offset)
> +{
> + ccdf->cfg = rte_cpu_to_le_32(QDMA_CCDF_FOTMAT | offset);
> +}
> +
> +static inline int
> +qdma_ccdf_get_status(const struct fsl_qdma_format *ccdf)
> +{
> + return (rte_le_to_cpu_32(ccdf->status) & QDMA_CCDF_MASK) >> QDMA_CCDF_STATUS;
> +}
> +
> +static inline void
> +qdma_ccdf_set_ser(struct fsl_qdma_format *ccdf, int status)
> +{
> + ccdf->status = rte_cpu_to_le_32(QDMA_CCDF_SER | status);
> +}
> +
> +static inline void qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
> +{
> + csgf->cfg = rte_cpu_to_le_32(len & QDMA_SG_LEN_MASK);
> +}
> +
> +static inline void qdma_csgf_set_f(struct fsl_qdma_format *csgf, int len)
> +{
> + csgf->cfg = rte_cpu_to_le_32(QDMA_SG_FIN | (len & QDMA_SG_LEN_MASK));
> +}
> +
> +static inline void qdma_csgf_set_e(struct fsl_qdma_format *csgf, int len)
> +{
> + csgf->cfg = rte_cpu_to_le_32(QDMA_SG_EXT | (len & QDMA_SG_LEN_MASK));
> +}
> +
> +static inline int ilog2(int x)
> +{
> + int log = 0;
> +
> + x >>= 1;
> +
> + while (x) {
> + log++;
> + x >>= 1;
> + }
> + return log;
> +}
> +
> +static u32 qdma_readl(struct fsl_qdma_engine *qdma, void *addr)
> +{
> + return QDMA_IN(qdma, addr);
> +}
> +
> +static void qdma_writel(struct fsl_qdma_engine *qdma, u32 val,
> + void *addr)
> +{
> + QDMA_OUT(qdma, addr, val);
> +}
> +
> +static void *dma_pool_alloc(int size, int aligned, dma_addr_t *phy_addr)
> +{
> +#ifdef QDMA_MEMZONE
> + void *virt_addr;
> +
> + virt_addr = rte_malloc("dma pool alloc", size, aligned);
> + if (!virt_addr)
> + return NULL;
> +
> + *phy_addr = rte_mem_virt2iova(virt_addr);
> +
> + return virt_addr;
> +#else
> + const struct rte_memzone *mz;
> + char mz_name[RTE_MEMZONE_NAMESIZE];
> + uint32_t core_id = rte_lcore_id();
> + unsigned int socket_id;
> + int count = 0;
> +
> + bzero(mz_name, sizeof(*mz_name));
> + snprintf(mz_name, sizeof(mz_name) - 1, "%lx-times-%d",
> + (unsigned long)rte_get_timer_cycles(), count);
> + if (core_id == (unsigned int)LCORE_ID_ANY)
> + core_id = 0;
> + socket_id = rte_lcore_to_socket_id(core_id);
> + mz = rte_memzone_reserve_aligned(mz_name, size, socket_id, 0, aligned);
> + if (!mz) {
> + *phy_addr = 0;
> + return NULL;
> + }
> + *phy_addr = mz->iova;
> +
> + qdma_mz_mapping[qdma_mz_count++] = mz;
> +
> + return mz->addr;
> +
> +#endif
> +}
> +
> +#ifdef QDMA_MEMZONE
> +static void dma_pool_free(void *addr)
> +{
> + rte_free(addr);
> +}
> +#else
> +static void dma_pool_free(dma_addr_t *addr)
> +{
> + uint16_t i;
> +
> + for (i = 0; i < qdma_mz_count; i++) {
> + if (addr == (dma_addr_t *)qdma_mz_mapping[i]->iova) {
> + rte_memzone_free(qdma_mz_mapping[i]);
> + return;
> + }
> + }
> +}
> +#endif
> +
> +static void fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
> +{
> + struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> + struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
> + struct fsl_qdma_comp *comp_temp, *_comp_temp;
> + int id;
> +
> + if (--fsl_queue->count)
> + goto finally;
> +
> + id = (fsl_qdma->block_base - fsl_queue->block_base) /
> + fsl_qdma->block_offset;
> +
> + while (rte_atomic32_read(&wait_task[id]) == 1)
> + rte_delay_us(QDMA_DELAY);
> +
> + list_for_each_entry_safe(comp_temp, _comp_temp,
> + &fsl_queue->comp_used, list) {
> + list_del(&comp_temp->list);
> +#ifdef QDMA_MEMZONE
> + dma_pool_free(comp_temp->virt_addr);
> + dma_pool_free(comp_temp->desc_virt_addr);
> +#else
> + dma_pool_free((dma_addr_t *)comp_temp->bus_addr);
> + dma_pool_free((dma_addr_t *)comp_temp->desc_bus_addr);
> +#endif
> + rte_free(comp_temp);
> + }
> +
> + list_for_each_entry_safe(comp_temp, _comp_temp,
> + &fsl_queue->comp_free, list) {
> + list_del(&comp_temp->list);
> +#ifdef QDMA_MEMZONE
> + dma_pool_free(comp_temp->virt_addr);
> + dma_pool_free(comp_temp->desc_virt_addr);
> +#else
> + dma_pool_free((dma_addr_t *)comp_temp->bus_addr);
> + dma_pool_free((dma_addr_t *)comp_temp->desc_bus_addr);
> +#endif
> + rte_free(comp_temp);
> + }
> +
> +finally:
> + fsl_qdma->desc_allocated--;
> +}
> +
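> +/*
> + * Populate a compound frame for a single memcpy: the head CCDF points
> + * 16 bytes past bus_addr (the frame list table), followed by one
> + * compound S/G entry for the table itself and one each for source and
> + * destination, plus the source/destination descriptor (SDF/DDF) pair.
> + */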
> +static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
> + dma_addr_t dst, dma_addr_t src, u32 len)
> +{
> + struct fsl_qdma_format *ccdf, *csgf_desc, *csgf_src, *csgf_dest;
> + struct fsl_qdma_sdf *sdf;
> + struct fsl_qdma_ddf *ddf;
> +
> + ccdf = (struct fsl_qdma_format *)fsl_comp->virt_addr;
> + csgf_desc = (struct fsl_qdma_format *)fsl_comp->virt_addr + 1;
> + csgf_src = (struct fsl_qdma_format *)fsl_comp->virt_addr + 2;
> + csgf_dest = (struct fsl_qdma_format *)fsl_comp->virt_addr + 3;
> + sdf = (struct fsl_qdma_sdf *)fsl_comp->desc_virt_addr;
> + ddf = (struct fsl_qdma_ddf *)fsl_comp->desc_virt_addr + 1;
> +
> + memset(fsl_comp->virt_addr, 0, FSL_QDMA_COMMAND_BUFFER_SIZE);
> + memset(fsl_comp->desc_virt_addr, 0, FSL_QDMA_DESCRIPTOR_BUFFER_SIZE);
> + /* Head Command Descriptor(Frame Descriptor) */
> + qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16);
> + qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(ccdf));
> + qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(ccdf));
> + /* Status notification is enqueued to status queue. */
> + /* Compound Command Descriptor(Frame List Table) */
> + qdma_desc_addr_set64(csgf_desc, fsl_comp->desc_bus_addr);
> + /* It must be 32 as Compound S/G Descriptor */
> + qdma_csgf_set_len(csgf_desc, 32);
> + qdma_desc_addr_set64(csgf_src, src);
> + qdma_csgf_set_len(csgf_src, len);
> + qdma_desc_addr_set64(csgf_dest, dst);
> + qdma_csgf_set_len(csgf_dest, len);
> + /* This entry is the last entry. */
> + qdma_csgf_set_f(csgf_dest, len);
> + /* Descriptor Buffer */
> + sdf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
> + FSL_QDMA_CMD_RWTTYPE_OFFSET);
> + ddf->cmd = rte_cpu_to_le_32(FSL_QDMA_CMD_RWTTYPE <<
> + FSL_QDMA_CMD_RWTTYPE_OFFSET);
> + ddf->cmd |= rte_cpu_to_le_32(FSL_QDMA_CMD_LWC <<
> + FSL_QDMA_CMD_LWC_OFFSET);
> +}
> +
> +/*
> + * Pre-request command descriptor and compound S/G for enqueue.
> + */
> +static int fsl_qdma_pre_request_enqueue_comp_desc(struct fsl_qdma_queue *queue,
> + int size, int aligned)
> +{
> + struct fsl_qdma_comp *comp_temp;
> + int i;
> +
> + for (i = 0; i < (int)(queue->n_cq + COMMAND_QUEUE_OVERFLOW); i++) {
> + comp_temp = rte_zmalloc("qdma: comp temp",
> + sizeof(*comp_temp), 0);
> + if (!comp_temp)
> + return -ENOMEM;
> +
> + comp_temp->virt_addr =
> + dma_pool_alloc(size, aligned, &comp_temp->bus_addr);
> + if (!comp_temp->virt_addr) {
> + rte_free(comp_temp);
> + return -ENOMEM;
> + }
> +
> + list_add_tail(&comp_temp->list, &queue->comp_free);
> + }
> +
> + return 0;
> +}
> +
> +/*
> + * Pre-request source and destination descriptor for enqueue.
> + */
> +static int fsl_qdma_pre_request_enqueue_sd_desc(struct fsl_qdma_queue *queue,
> + int size, int aligned)
> +{
> + struct fsl_qdma_comp *comp_temp, *_comp_temp;
> +
> + list_for_each_entry_safe(comp_temp, _comp_temp,
> + &queue->comp_free, list) {
> + comp_temp->desc_virt_addr =
> + dma_pool_alloc(size, aligned, &comp_temp->desc_bus_addr);
> + if (!comp_temp->desc_virt_addr)
> + return -ENOMEM;
> + }
> +
> + return 0;
> +}
> +
> +/*
> + * Request a command descriptor for enqueue.
> + */
> +static struct fsl_qdma_comp *
> +fsl_qdma_request_enqueue_desc(struct fsl_qdma_chan *fsl_chan)
> +{
> + struct fsl_qdma_queue *queue = fsl_chan->queue;
> + struct fsl_qdma_comp *comp_temp;
> + int timeout = COMP_TIMEOUT;
> +
> + while (timeout) {
> + if (!list_empty(&queue->comp_free)) {
> + comp_temp = list_first_entry(&queue->comp_free,
> + struct fsl_qdma_comp,
> + list);
> + list_del(&comp_temp->list);
> + return comp_temp;
> + }
> + timeout--;
> + }
> +
> + return NULL;
> +}
> +
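> +/*
> + * Allocate the command queue rings: one contiguous queue_head array of
> + * queue_num * blocks entries, each backed by a DMA-able circular
> + * descriptor buffer sized and aligned to n_cq descriptors.
> + */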
> +static struct fsl_qdma_queue
> +*fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
> +{
> + struct fsl_qdma_queue *queue_head, *queue_temp;
> + int len, i, j;
> + int queue_num;
> + int blocks;
> + unsigned int queue_size[FSL_QDMA_QUEUE_MAX];
> +
> + queue_num = fsl_qdma->n_queues;
> + blocks = fsl_qdma->num_blocks;
> +
> + len = sizeof(*queue_head) * queue_num * blocks;
> + queue_head = rte_zmalloc("qdma: queue head", len, 0);
> + if (!queue_head)
> + return NULL;
> +
> + for (i = 0; i < FSL_QDMA_QUEUE_MAX; i++)
> + queue_size[i] = QDMA_QUEUE_SIZE;
> +
> + for (j = 0; j < blocks; j++) {
> + for (i = 0; i < queue_num; i++) {
> + if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
> + queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
> + DPAA_QDMA_ERR("Get wrong queue-sizes.\n");
> + return NULL;
> + }
> + queue_temp = queue_head + i + (j * queue_num);
> +
> + queue_temp->cq =
> + dma_pool_alloc(sizeof(struct fsl_qdma_format) *
> + queue_size[i],
> + sizeof(struct fsl_qdma_format) *
> + queue_size[i], &queue_temp->bus_addr);
> + if (!queue_temp->cq)
> + return NULL;
> +
> + memset(queue_temp->cq, 0x0, queue_size[i] *
> + sizeof(struct fsl_qdma_format));
> +
> + queue_temp->block_base = fsl_qdma->block_base +
> + FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
> + queue_temp->n_cq = queue_size[i];
> + queue_temp->id = i;
> + queue_temp->count = 0;
> + queue_temp->virt_head = queue_temp->cq;
> +
> + }
> + }
> + return queue_head;
> +}
> +
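> +/*
> + * Allocate one status queue; the engine posts per-descriptor
> + * completion records here, one status queue per block.
> + */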
> +static struct fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
> +{
> + struct fsl_qdma_queue *status_head;
> + unsigned int status_size;
> +
> + status_size = QDMA_STATUS_SIZE;
> + if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
> + status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
> + DPAA_QDMA_ERR("Get wrong status_size.\n");
> + return NULL;
> + }
> +
> + status_head = rte_zmalloc("qdma: status head", sizeof(*status_head), 0);
> + if (!status_head)
> + return NULL;
> +
> + /*
> + * Buffer for queue command
> + */
> + status_head->cq = dma_pool_alloc(sizeof(struct fsl_qdma_format) *
> + status_size,
> + sizeof(struct fsl_qdma_format) *
> + status_size,
> + &status_head->bus_addr);
> + if (!status_head->cq)
> + return NULL;
> +
> + memset(status_head->cq, 0x0, status_size *
> + sizeof(struct fsl_qdma_format));
> +
> + status_head->n_cq = status_size;
> + status_head->virt_head = status_head->cq;
> +
> + return status_head;
> +}
> +
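> +/*
> + * Halt the engine: set DMR.DQD to stop dequeueing, disable every
> + * command queue, then poll DSR.DB until the engine reports idle or
> + * RETRIES attempts expire.
> + */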
> +static int fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
> +{
> + void *ctrl = fsl_qdma->ctrl_base;
> + void *block;
> + int i, count = RETRIES;
> + unsigned int j;
> + u32 reg;
> +
> + /* Disable the command queue and wait for idle state. */
> + reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR);
> + reg |= FSL_QDMA_DMR_DQD;
> + qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR);
> + for (j = 0; j < fsl_qdma->num_blocks; j++) {
> + block = fsl_qdma->block_base +
> + FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
> + for (i = 0; i < FSL_QDMA_QUEUE_NUM_MAX; i++)
> + qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BCQMR(i));
> + }
> + while (true) {
> + reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DSR);
> + if (!(reg & FSL_QDMA_DSR_DB))
> + break;
> + if (count-- < 0)
> + return -EBUSY;
> + rte_delay_us(100);
> + }
> +
> + for (j = 0; j < fsl_qdma->num_blocks; j++) {
> + block = fsl_qdma->block_base +
> + FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
> +
> + /* Disable status queue. */
> + qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BSQMR);
> +
> + /*
> + * clear the command queue interrupt detect register for
> + * all queues.
> + */
> + qdma_writel(fsl_qdma, 0xffffffff, block + FSL_QDMA_BCQIDR(0));
> + }
> +
> + return 0;
> +}
> +
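> +/*
> + * Drain the block status queue: each record identifies the command
> + * queue and completion address; pre_addr/pre_queue remember the last
> + * record seen so a re-read of the same entry is treated as a
> + * duplicate and only pops the status ring.
> + */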
> +static int
> +fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma,
> + void *block, int id,
> + struct rte_qdma_enqdeq *e_context)
> +{
> + struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
> + struct fsl_qdma_queue *fsl_status = fsl_qdma->status[id];
> + struct fsl_qdma_queue *temp_queue;
> + struct fsl_qdma_format *status_addr;
> + struct fsl_qdma_comp *fsl_comp = NULL;
> + u32 reg, i;
> + int count = 0;
> + bool duplicate, duplicate_handle;
> +
> + while (true) {
> + reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQSR);
> + if (reg & FSL_QDMA_BSQSR_QE)
> + return count;
> +
> + duplicate = 0;
> + duplicate_handle = 0;
> + status_addr = fsl_status->virt_head;
> +
> + if (qdma_ccdf_get_queue(status_addr) ==
> + pre_queue[id] &&
> + qdma_ccdf_addr_get64(status_addr) ==
> + pre_addr[id]) {
> + duplicate = 1;
> + }
> + i = qdma_ccdf_get_queue(status_addr) +
> + id * fsl_qdma->n_queues;
> + pre_addr[id] = qdma_ccdf_addr_get64(status_addr);
> + pre_queue[id] = qdma_ccdf_get_queue(status_addr);
> + temp_queue = fsl_queue + i;
> +
> + if (list_empty(&temp_queue->comp_used)) {
> + if (duplicate)
> + duplicate_handle = 1;
> + else
> + continue;
> + } else {
> + fsl_comp = list_first_entry(&temp_queue->comp_used,
> + struct fsl_qdma_comp,
> + list);
> + if (fsl_comp->bus_addr + 16 !=
> + pre_addr[id]) {
> + if (duplicate)
> + duplicate_handle = 1;
> + else
> + return -1;
> + }
> + }
> +
> + if (duplicate_handle) {
> + reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR);
> + reg |= FSL_QDMA_BSQMR_DI;
> + qdma_desc_addr_set64(status_addr, 0x0);
> + fsl_status->virt_head++;
> + if (fsl_status->virt_head == fsl_status->cq
> + + fsl_status->n_cq)
> + fsl_status->virt_head = fsl_status->cq;
> + qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
> + continue;
> + }
> + list_del(&fsl_comp->list);
> +
> + reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR);
> + reg |= FSL_QDMA_BSQMR_DI;
> + qdma_desc_addr_set64(status_addr, 0x0);
> + fsl_status->virt_head++;
> + if (fsl_status->virt_head == fsl_status->cq + fsl_status->n_cq)
> + fsl_status->virt_head = fsl_status->cq;
> + qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
> + list_add_tail(&fsl_comp->list, &temp_queue->comp_free);
> + e_context->job[count] = (struct rte_qdma_job *)fsl_comp->params;
> + count++;
> +
> + fsl_comp->qchan->status = DMA_COMPLETE;
> + }
> + return count;
> +}
> +
> +static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
> +{
> + struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
> + struct fsl_qdma_queue *temp;
> + void *ctrl = fsl_qdma->ctrl_base;
> + void *status = fsl_qdma->status_base;
> + void *block;
> + u32 i, j;
> + u32 reg;
> + int ret, val;
> +
> + /* Try to halt the qDMA engine first. */
> + ret = fsl_qdma_halt(fsl_qdma);
> + if (ret) {
> + DPAA_QDMA_ERR("DMA halt failed!");
> + return ret;
> + }
> +
> + for (j = 0; j < fsl_qdma->num_blocks; j++) {
> + block = fsl_qdma->block_base +
> + FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
> + for (i = 0; i < fsl_qdma->n_queues; i++) {
> + temp = fsl_queue + i + (j * fsl_qdma->n_queues);
> + /*
> + * Initialize Command Queue registers to
> + * point to the first
> + * command descriptor in memory.
> + * Dequeue Pointer Address Registers
> + * Enqueue Pointer Address Registers
> + */
> +
> + qdma_writel(fsl_qdma, lower_32_bits(temp->bus_addr),
> + block + FSL_QDMA_BCQDPA_SADDR(i));
> + qdma_writel(fsl_qdma, upper_32_bits(temp->bus_addr),
> + block + FSL_QDMA_BCQEDPA_SADDR(i));
> + qdma_writel(fsl_qdma, lower_32_bits(temp->bus_addr),
> + block + FSL_QDMA_BCQEPA_SADDR(i));
> + qdma_writel(fsl_qdma, upper_32_bits(temp->bus_addr),
> + block + FSL_QDMA_BCQEEPA_SADDR(i));
> +
> + /* Initialize the queue mode. */
> + reg = FSL_QDMA_BCQMR_EN;
> + reg |= FSL_QDMA_BCQMR_CD_THLD(ilog2(temp->n_cq) - 4);
> + reg |= FSL_QDMA_BCQMR_CQ_SIZE(ilog2(temp->n_cq) - 6);
> + qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BCQMR(i));
> + }
> +
> + /*
> + * Workaround for erratum ERR010812: enable XOFF to avoid
> + * enqueue rejections by setting the SQCCMR ENTER_WM field
> + * to 0x20.
> + */
> +
> + qdma_writel(fsl_qdma, FSL_QDMA_SQCCMR_ENTER_WM,
> + block + FSL_QDMA_SQCCMR);
> +
> + /*
> + * Initialize status queue registers to point to the first
> + * command descriptor in memory.
> + * Dequeue Pointer Address Registers
> + * Enqueue Pointer Address Registers
> + */
> +
> + qdma_writel(fsl_qdma,
> + upper_32_bits(fsl_qdma->status[j]->bus_addr),
> + block + FSL_QDMA_SQEEPAR);
> + qdma_writel(fsl_qdma,
> + lower_32_bits(fsl_qdma->status[j]->bus_addr),
> + block + FSL_QDMA_SQEPAR);
> + qdma_writel(fsl_qdma,
> + upper_32_bits(fsl_qdma->status[j]->bus_addr),
> + block + FSL_QDMA_SQEDPAR);
> + qdma_writel(fsl_qdma,
> + lower_32_bits(fsl_qdma->status[j]->bus_addr),
> + block + FSL_QDMA_SQDPAR);
> + /* Disable all queue interrupts. */
> +
> + qdma_writel(fsl_qdma, 0x0, block + FSL_QDMA_BCQIER(0));
> + qdma_writel(fsl_qdma, 0x0, block + FSL_QDMA_BSQICR);
> + qdma_writel(fsl_qdma, 0x0, block + FSL_QDMA_CQIER);
> +
> + /* Initialize the status queue mode. */
> + reg = FSL_QDMA_BSQMR_EN;
> + val = ilog2(fsl_qdma->status[j]->n_cq) - 6;
> + reg |= FSL_QDMA_BSQMR_CQ_SIZE(val);
> + qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
> + }
> +
> + /* Initialize controller interrupt register. */
> + qdma_writel(fsl_qdma, 0xffffffff, status + FSL_QDMA_DEDR);
> + qdma_writel(fsl_qdma, 0xffffffff, status + FSL_QDMA_DEIER);
> +
> + reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR);
> + reg &= ~FSL_QDMA_DMR_DQD;
> + qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR);
> +
> + return 0;
> +}
> +
> +static void *
> +fsl_qdma_prep_memcpy(void *fsl_chan, dma_addr_t dst,
> + dma_addr_t src, size_t len,
> + void *call_back,
> + void *param)
> +{
> + struct fsl_qdma_comp *fsl_comp;
> +
> + fsl_comp =
> + fsl_qdma_request_enqueue_desc((struct fsl_qdma_chan *)fsl_chan);
> + if (!fsl_comp)
> + return NULL;
> +
> + fsl_comp->qchan = fsl_chan;
> + fsl_comp->call_back_func = call_back;
> + fsl_comp->params = param;
> +
> + fsl_qdma_comp_fill_memcpy(fsl_comp, dst, src, len);
> +
> + ((struct fsl_qdma_chan *)fsl_chan)->status = DMA_IN_PREPAR;
> + return (void *)fsl_comp;
> +}
> +
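> +/*
> + * Copy the 16-byte head CCDF into the command ring at virt_head,
> + * wrap the ring if needed, and ring the doorbell by setting BCQMR.EI,
> + * after checking both the command and status queues for full/XOFF.
> + */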
> +static int fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan,
> + struct fsl_qdma_comp *fsl_comp)
> +{
> + struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> + void *block = fsl_queue->block_base;
> + u32 reg;
> +
> + if (!fsl_comp)
> + return -1;
> +
> + reg = qdma_readl(fsl_chan->qdma, block +
> + FSL_QDMA_BCQSR(fsl_queue->id));
> + if (reg & (FSL_QDMA_BCQSR_QF | FSL_QDMA_BCQSR_XOFF))
> + return -1;
> +
> + memcpy(fsl_queue->virt_head++, fsl_comp->virt_addr, 16);
> + if (fsl_queue->virt_head == fsl_queue->cq + fsl_queue->n_cq)
> + fsl_queue->virt_head = fsl_queue->cq;
> +
> + list_add_tail(&fsl_comp->list, &fsl_queue->comp_used);
> +
> + reg = qdma_readl(fsl_chan->qdma, block + FSL_QDMA_BSQSR);
> + if (reg & FSL_QDMA_BSQSR_QF)
> + return -1;
> +
> + reg = qdma_readl(fsl_chan->qdma, block + FSL_QDMA_BCQMR(fsl_queue->id));
> + reg |= FSL_QDMA_BCQMR_EI;
> + qdma_writel(fsl_chan->qdma, reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
> +
> + fsl_chan->status = DMA_IN_PROGRESS;
> + return 0;
> +}
> +
> +static int fsl_qdma_issue_pending(void *fsl_chan_org, void *fsl_comp)
> +{
> + struct fsl_qdma_chan *fsl_chan = fsl_chan_org;
> +
> + return fsl_qdma_enqueue_desc(fsl_chan,
> + (struct fsl_qdma_comp *)fsl_comp);
> +}
> +
> +static int fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
> +{
> + struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> + struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
> + int ret;
> +
> + if (fsl_queue->count++)
> + goto finally;
> +
> + INIT_LIST_HEAD(&fsl_queue->comp_free);
> + INIT_LIST_HEAD(&fsl_queue->comp_used);
> +
> + ret = fsl_qdma_pre_request_enqueue_comp_desc(fsl_queue,
> + FSL_QDMA_COMMAND_BUFFER_SIZE, 64);
> + if (ret) {
> + DPAA_QDMA_ERR(
> + "Failed to allocate DMA buffer for comp descriptor");
> + goto exit;
> + }
> +
> + ret = fsl_qdma_pre_request_enqueue_sd_desc(fsl_queue,
> + FSL_QDMA_DESCRIPTOR_BUFFER_SIZE, 32);
> + if (ret) {
> + DPAA_QDMA_ERR(
> + "Failed to allocate DMA buffer for sd descriptor");
> + goto exit;
> + }
> +
> +finally:
> + return fsl_qdma->desc_allocated++;
> +
> +exit:
> + return -ENOMEM;
> +}
> +
> +static int
> +dpaa_get_channel(struct fsl_qdma_engine *fsl_qdma, uint32_t core)
> +{
> + u32 i, start, end;
> +
> + start = core * QDMA_QUEUES;
> +
> + /* TODO: Currently, supporting 1 queue per core. In future,
> + * if there is more queues requirement then we may need extra
> + * layer of virtual queues on dequeue side as we have only one
> + * status queue per block/core and then we can change below
> + * statement as:
> + * end = start + QDMA_QUEUES;
> + */
> + end = start + 1;
> + for (i = start; i < end; i++) {
> + struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
> +
> + if (fsl_chan->free) {
> + fsl_chan->free = false;
> + fsl_qdma_alloc_chan_resources(fsl_chan);
> + return i;
> + }
> + }
> +
> + return -1;
> +}
> +
> +static void
> +dma_release(void *fsl_chan)
> +{
> + ((struct fsl_qdma_chan *)fsl_chan)->free = true;
> + fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
> +}
> +
> +
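> +/*
> + * Rawdev control ops: configuration is static for this engine, so
> + * configure/start/reset/close are no-op stubs.
> + */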
> +static int
> +dpaa_qdma_configure(__rte_unused const struct rte_rawdev *rawdev,
> + __rte_unused rte_rawdev_obj_t config)
> +{
> + return 0;
> +}
> +
> +static int
> +dpaa_qdma_start(__rte_unused struct rte_rawdev *rawdev)
> +{
> + return 0;
> +}
> +
> +static int
> +dpaa_qdma_reset(__rte_unused struct rte_rawdev *rawdev)
> +{
> + return 0;
> +}
> +
> +static int
> +dpaa_qdma_close(__rte_unused struct rte_rawdev *rawdev)
> +{
> + return 0;
> +}
> +
> +
> +static int
> +dpaa_qdma_queue_setup(struct rte_rawdev *rawdev,
> + __rte_unused uint16_t queue_id,
> + rte_rawdev_obj_t queue_conf)
> +{
> + struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
> + struct rte_qdma_queue_config *q_config =
> + (struct rte_qdma_queue_config *)queue_conf;
> +
> + return dpaa_get_channel(fsl_qdma, q_config->lcore_id);
> +}
> +
> +static int
> +dpaa_qdma_queue_release(struct rte_rawdev *rawdev,
> + uint16_t vq_id)
> +{
> + struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
> + struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[vq_id];
> +
> + dma_release(fsl_chan);
> + return 0;
> +}
> +
> +static int
> +dpaa_qdma_enqueue(__rte_unused struct rte_rawdev *rawdev,
> + __rte_unused struct rte_rawdev_buf **buffers,
> + unsigned int nb_jobs,
> + rte_rawdev_obj_t context)
> +{
> + struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
> + struct rte_qdma_enqdeq *e_context = (struct rte_qdma_enqdeq *)context;
> + struct rte_qdma_job **job = e_context->job;
> + struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[e_context->vq_id];
> + unsigned int i = 0, ret;
> +
> + for (i = 0; i < nb_jobs; i++) {
> + void *fsl_comp = NULL;
> +
> + fsl_comp = fsl_qdma_prep_memcpy(fsl_chan,
> + (dma_addr_t)job[i]->dest,
> + (dma_addr_t)job[i]->src,
> + job[i]->len, NULL, job[i]);
> + if (!fsl_comp) {
> + DPAA_QDMA_DP_DEBUG("fsl_comp is NULL");
> + return i;
> + }
> + ret = fsl_qdma_issue_pending(fsl_chan, fsl_comp);
> + if (ret)
> + return i;
> + }
> +
> + return i;
> +}
> +
> +static int
> +dpaa_qdma_dequeue(struct rte_rawdev *rawdev,
> + __rte_unused struct rte_rawdev_buf **buffers,
> + __rte_unused unsigned int nb_jobs,
> + rte_rawdev_obj_t cntxt)
> +{
> + struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
> + struct rte_qdma_enqdeq *e_context = (struct rte_qdma_enqdeq *)cntxt;
> + int id = (int)((e_context->vq_id) / QDMA_QUEUES);
> + void *block;
> + void *ctrl = fsl_qdma->ctrl_base;
> + unsigned int reg;
> + int intr;
> + void *status = fsl_qdma->status_base;
> +
> + intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DEDR);
> + if (intr) {
> + DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
> + intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW0R);
> + DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
> + intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW1R);
> + DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
> + intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW2R);
> + DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
> + intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW3R);
> + DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
> + intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFQIDR);
> + DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
> + intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DECBR);
> + DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
> + qdma_writel(fsl_qdma, 0xffffffff,
> + status + FSL_QDMA_DEDR);
> + intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DEDR);
> + }
> +
> + block = fsl_qdma->block_base +
> + FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id);
> +
> + reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQSR);
> + if (reg & FSL_QDMA_BSQSR_QE)
> + return 0;
> +
> + intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id, e_context);
> + if (intr < 0) {
> + reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR);
> + reg |= FSL_QDMA_DMR_DQD;
> + qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR);
> + qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BCQIER(0));
> + DPAA_QDMA_ERR("QDMA: status err!\n");
> + }
> +
> + return intr;
> +}
> +
> +static int
> +dpaa_qdma_attr_get(__rte_unused struct rte_rawdev *rawdev,
> + __rte_unused const char *attr_name,
> + __rte_unused uint64_t *attr_value)
> +{
> + return 0;
> +}
> +
> +static struct rte_rawdev_ops dpaa_qdma_ops = {
> + .dev_configure = dpaa_qdma_configure,
> + .dev_start = dpaa_qdma_start,
> + .dev_reset = dpaa_qdma_reset,
> + .dev_close = dpaa_qdma_close,
> + .queue_setup = dpaa_qdma_queue_setup,
> + .queue_release = dpaa_qdma_queue_release,
> + .attr_get = dpaa_qdma_attr_get,
> + .enqueue_bufs = dpaa_qdma_enqueue,
> + .dequeue_bufs = dpaa_qdma_dequeue,
> +};
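> +
> +/*
> + * Illustrative usage sketch: an application drives these ops through
> + * the generic rawdev API with the enqueue/dequeue context from
> + * rte_pmd_dpaa2_qdma.h, roughly as below (variable names are
> + * placeholders, not part of this patch):
> + *
> + *	struct rte_qdma_job job = {
> + *		.src = src_iova, .dest = dst_iova, .len = length,
> + *	};
> + *	struct rte_qdma_job *jobs[1] = { &job };
> + *	struct rte_qdma_enqdeq ctx = { .vq_id = vq, .job = jobs };
> + *
> + *	rte_rawdev_enqueue_buffers(dev_id, NULL, 1, &ctx);
> + *	while (rte_rawdev_dequeue_buffers(dev_id, NULL, 1, &ctx) == 0)
> + *		;
> + */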
> +
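> +/*
> + * Map the qDMA CCSR register space via /dev/mem, carve out the status
> + * and block register windows at fixed offsets, and allocate the
> + * per-block status queues and per-queue command rings.
> + */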
> +static int
> +dpaa_qdma_init(struct rte_rawdev *rawdev)
> +{
> + struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
> + struct fsl_qdma_chan *fsl_chan;
> + uint64_t phys_addr;
> + unsigned int len;
> + int ccsr_qdma_fd;
> + int regs_size;
> + int ret;
> + u32 i;
> +
> + fsl_qdma->desc_allocated = 0;
> + fsl_qdma->n_chans = VIRT_CHANNELS;
> + fsl_qdma->n_queues = QDMA_QUEUES;
> + fsl_qdma->num_blocks = QDMA_BLOCKS;
> + fsl_qdma->block_offset = QDMA_BLOCK_OFFSET;
> +
> + len = sizeof(*fsl_chan) * fsl_qdma->n_chans;
> + fsl_qdma->chans = rte_zmalloc("qdma: fsl chans", len, 0);
> + if (!fsl_qdma->chans)
> + return -1;
> +
> + len = sizeof(struct fsl_qdma_queue *) * fsl_qdma->num_blocks;
> + fsl_qdma->status = rte_zmalloc("qdma: fsl status", len, 0);
> + if (!fsl_qdma->status) {
> + rte_free(fsl_qdma->chans);
> + return -1;
> + }
> +
> + for (i = 0; i < fsl_qdma->num_blocks; i++) {
> + rte_atomic32_init(&wait_task[i]);
> + fsl_qdma->status[i] = fsl_qdma_prep_status_queue();
> + if (!fsl_qdma->status[i])
> + goto err;
> + }
> +
> + ccsr_qdma_fd = open("/dev/mem", O_RDWR);
> + if (unlikely(ccsr_qdma_fd < 0)) {
> + DPAA_QDMA_ERR("Can not open /dev/mem for qdma CCSR map");
> + goto err;
> + }
> +
> + regs_size = fsl_qdma->block_offset * (fsl_qdma->num_blocks + 2);
> + phys_addr = QDMA_CCSR_BASE;
> + fsl_qdma->ctrl_base = mmap(NULL, regs_size, PROT_READ |
> + PROT_WRITE, MAP_SHARED,
> + ccsr_qdma_fd, phys_addr);
> +
> + close(ccsr_qdma_fd);
> + if (fsl_qdma->ctrl_base == MAP_FAILED) {
> + DPAA_QDMA_ERR("Can not map CCSR base qdma: Phys: 0x%lx "
> + "size %d\n", phys_addr, regs_size);
> + goto err;
> + }
> +
> + fsl_qdma->status_base = fsl_qdma->ctrl_base + QDMA_BLOCK_OFFSET;
> + fsl_qdma->block_base = fsl_qdma->status_base + QDMA_BLOCK_OFFSET;
> +
> + fsl_qdma->queue = fsl_qdma_alloc_queue_resources(fsl_qdma);
> + if (!fsl_qdma->queue) {
> + munmap(fsl_qdma->ctrl_base, regs_size);
> + goto err;
> + }
> +
> + fsl_qdma->big_endian = QDMA_BIG_ENDIAN;
> + for (i = 0; i < fsl_qdma->n_chans; i++) {
> + struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
> +
> + fsl_chan->qdma = fsl_qdma;
> + fsl_chan->queue = fsl_qdma->queue + i % (fsl_qdma->n_queues *
> + fsl_qdma->num_blocks);
> + fsl_chan->free = true;
> + }
> +
> + ret = fsl_qdma_reg_init(fsl_qdma);
> + if (ret) {
> + DPAA_QDMA_ERR("Can't Initialize the qDMA engine.\n");
> + munmap(fsl_qdma->ctrl_base, regs_size);
> + goto err;
> + }
> +
> + return 0;
> +
> +err:
> + rte_free(fsl_qdma->chans);
> + rte_free(fsl_qdma->status);
> +
> + return -1;
> +}
> +
> +static int
> +dpaa_qdma_probe(struct rte_dpaa_driver *dpaa_drv,
> + struct rte_dpaa_device *dpaa_dev)
> +{
> + struct rte_rawdev *rawdev;
> + int ret;
> +
> + rawdev = rte_rawdev_pmd_allocate(dpaa_dev->device.name,
> + sizeof(struct fsl_qdma_engine),
> + rte_socket_id());
> + if (!rawdev) {
> + DPAA_QDMA_ERR("Unable to allocate rawdevice");
> + return -EINVAL;
> + }
> +
> + dpaa_dev->rawdev = rawdev;
> + rawdev->dev_ops = &dpaa_qdma_ops;
> + rawdev->device = &dpaa_dev->device;
> + rawdev->driver_name = dpaa_drv->driver.name;
> +
> + /* Invoke PMD device initialization function */
> + ret = dpaa_qdma_init(rawdev);
> + if (ret) {
> + rte_rawdev_pmd_release(rawdev);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +dpaa_qdma_remove(struct rte_dpaa_device *dpaa_dev)
> +{
> + struct rte_rawdev *rawdev = dpaa_dev->rawdev;
> + struct fsl_qdma_engine *fsl_qdma = rawdev->dev_private;
> + int ret;
> +
> + rte_free(fsl_qdma->status);
> + rte_free(fsl_qdma->chans);
> + ret = rte_rawdev_pmd_release(rawdev);
> + if (ret)
> + DPAA_QDMA_ERR("Device cleanup failed\n");
> +
> + return 0;
> +}
> +
> +static struct rte_dpaa_driver rte_dpaa_qdma_pmd = {
> + .drv_type = FSL_DPAA_QDMA,
> + .probe = dpaa_qdma_probe,
> + .remove = dpaa_qdma_remove,
> +};
> +
> +RTE_PMD_REGISTER_DPAA(dpaa_qdma, rte_dpaa_qdma_pmd);
> +
> +RTE_INIT(dpaa_qdma_init_log)
> +{
> + dpaa_qdma_logtype = rte_log_register("pmd.raw.dpaa.qdma");
> + if (dpaa_qdma_logtype >= 0)
> + rte_log_set_level(dpaa_qdma_logtype, RTE_LOG_INFO);
> +}
> diff --git a/drivers/raw/dpaa_qdma/dpaa_qdma.h b/drivers/raw/dpaa_qdma/dpaa_qdma.h
> new file mode 100644
> index 0000000..a60fdcb
> --- /dev/null
> +++ b/drivers/raw/dpaa_qdma/dpaa_qdma.h
> @@ -0,0 +1,275 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020 NXP
> + */
> +
> +#ifndef _FSL_QDMA_H_
> +#define _FSL_QDMA_H_
> +
> +#include <dpaa_list.h>
> +#include <rte_atomic.h>
> +
> +#define u64 uint64_t
> +#define u32 uint32_t
> +#define u16 uint16_t
> +#define u8 uint8_t
> +
> +#ifdef DEBUG
> +#define debug printf
> +#else
> +#define debug
> +#endif
> +#ifndef BIT
> +#define BIT(nr) (1UL << (nr))
> +#endif
> +
> +#define CORE_NUMBER 4
> +#define RETRIES 5
> +
> +#ifndef GENMASK
> +#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
> +#define GENMASK(h, l) \
> + (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
> +#endif
> +
> +#define FSL_QDMA_DMR 0x0
> +#define FSL_QDMA_DSR 0x4
> +#define FSL_QDMA_DEIER 0xe00
> +#define FSL_QDMA_DEDR 0xe04
> +#define FSL_QDMA_DECFDW0R 0xe10
> +#define FSL_QDMA_DECFDW1R 0xe14
> +#define FSL_QDMA_DECFDW2R 0xe18
> +#define FSL_QDMA_DECFDW3R 0xe1c
> +#define FSL_QDMA_DECFQIDR 0xe30
> +#define FSL_QDMA_DECBR 0xe34
> +
> +#define FSL_QDMA_BCQMR(x) (0xc0 + 0x100 * (x))
> +#define FSL_QDMA_BCQSR(x) (0xc4 + 0x100 * (x))
> +#define FSL_QDMA_BCQEDPA_SADDR(x) (0xc8 + 0x100 * (x))
> +#define FSL_QDMA_BCQDPA_SADDR(x) (0xcc + 0x100 * (x))
> +#define FSL_QDMA_BCQEEPA_SADDR(x) (0xd0 + 0x100 * (x))
> +#define FSL_QDMA_BCQEPA_SADDR(x) (0xd4 + 0x100 * (x))
> +#define FSL_QDMA_BCQIER(x) (0xe0 + 0x100 * (x))
> +#define FSL_QDMA_BCQIDR(x) (0xe4 + 0x100 * (x))
> +
> +#define FSL_QDMA_SQEDPAR 0x808
> +#define FSL_QDMA_SQDPAR 0x80c
> +#define FSL_QDMA_SQEEPAR 0x810
> +#define FSL_QDMA_SQEPAR 0x814
> +#define FSL_QDMA_BSQMR 0x800
> +#define FSL_QDMA_BSQSR 0x804
> +#define FSL_QDMA_BSQICR 0x828
> +#define FSL_QDMA_CQMR 0xa00
> +#define FSL_QDMA_CQDSCR1 0xa08
> +#define FSL_QDMA_CQDSCR2 0xa0c
> +#define FSL_QDMA_CQIER 0xa10
> +#define FSL_QDMA_CQEDR 0xa14
> +#define FSL_QDMA_SQCCMR 0xa20
> +
> +#define FSL_QDMA_SQICR_ICEN
> +
> +#define FSL_QDMA_CQIDR_CQT 0xff000000
> +#define FSL_QDMA_CQIDR_SQPE 0x800000
> +#define FSL_QDMA_CQIDR_SQT 0x8000
> +
> +#define FSL_QDMA_BCQIER_CQTIE 0x8000
> +#define FSL_QDMA_BCQIER_CQPEIE 0x800000
> +#define FSL_QDMA_BSQICR_ICEN 0x80000000
> +#define FSL_QDMA_BSQICR_ICST(x) ((x) << 16)
> +#define FSL_QDMA_CQIER_MEIE 0x80000000
> +#define FSL_QDMA_CQIER_TEIE 0x1
> +#define FSL_QDMA_SQCCMR_ENTER_WM 0x200000
> +
> +#define FSL_QDMA_QUEUE_MAX 8
> +
> +#define FSL_QDMA_BCQMR_EN 0x80000000
> +#define FSL_QDMA_BCQMR_EI 0x40000000
> +#define FSL_QDMA_BCQMR_CD_THLD(x) ((x) << 20)
> +#define FSL_QDMA_BCQMR_CQ_SIZE(x) ((x) << 16)
> +
> +#define FSL_QDMA_BCQSR_QF 0x10000
> +#define FSL_QDMA_BCQSR_XOFF 0x1
> +
> +#define FSL_QDMA_BSQMR_EN 0x80000000
> +#define FSL_QDMA_BSQMR_DI 0x40000000
> +#define FSL_QDMA_BSQMR_CQ_SIZE(x) ((x) << 16)
> +
> +#define FSL_QDMA_BSQSR_QE 0x20000
> +#define FSL_QDMA_BSQSR_QF 0x10000
> +
> +#define FSL_QDMA_DMR_DQD 0x40000000
> +#define FSL_QDMA_DSR_DB 0x80000000
> +
> +#define FSL_QDMA_COMMAND_BUFFER_SIZE 64
> +#define FSL_QDMA_DESCRIPTOR_BUFFER_SIZE 32
> +#define FSL_QDMA_CIRCULAR_DESC_SIZE_MIN 64
> +#define FSL_QDMA_CIRCULAR_DESC_SIZE_MAX 16384
> +#define FSL_QDMA_QUEUE_NUM_MAX 8
> +
> +#define FSL_QDMA_CMD_RWTTYPE 0x4
> +#define FSL_QDMA_CMD_LWC 0x2
> +
> +#define FSL_QDMA_CMD_RWTTYPE_OFFSET 28
> +#define FSL_QDMA_CMD_NS_OFFSET 27
> +#define FSL_QDMA_CMD_DQOS_OFFSET 24
> +#define FSL_QDMA_CMD_WTHROTL_OFFSET 20
> +#define FSL_QDMA_CMD_DSEN_OFFSET 19
> +#define FSL_QDMA_CMD_LWC_OFFSET 16
> +
> +#define QDMA_CCDF_STATUS 20
> +#define QDMA_CCDF_OFFSET 20
> +#define QDMA_CCDF_MASK GENMASK(28, 20)
> +#define QDMA_CCDF_FORMAT BIT(29)
> +#define QDMA_CCDF_SER BIT(30)
> +
> +#define QDMA_SG_FIN BIT(30)
> +#define QDMA_SG_EXT BIT(31)
> +#define QDMA_SG_LEN_MASK GENMASK(29, 0)
> +
> +#define QDMA_BIG_ENDIAN 0x00000001
> +#define COMP_TIMEOUT 100000
> +#define COMMAND_QUEUE_OVERFLOW 10
> +
> +/* qdma engine attribute */
> +#define QDMA_QUEUE_SIZE 64
> +#define QDMA_STATUS_SIZE 64
> +#define QDMA_CCSR_BASE 0x8380000
> +#define VIRT_CHANNELS 32
> +#define QDMA_BLOCK_OFFSET 0x10000
> +#define QDMA_BLOCKS 4
> +#define QDMA_QUEUES 8
> +#define QDMA_DELAY 1000
> +
> +#define __arch_getq(a) (*(volatile u64 *)(a))
> +#define __arch_putq(v, a) (*(volatile u64 *)(a) = (v))
> +#define __arch_getq32(a) (*(volatile u32 *)(a))
> +#define __arch_putq32(v, a) (*(volatile u32 *)(a) = (v))
> +#define readq(c) \
> + ({ u64 __v = __arch_getq(c); rte_io_rmb(); __v; })
> +#define writeq(v, c) \
> + ({ u64 __v = v; rte_io_wmb(); __arch_putq(__v, c); __v; })
> +#define readq32(c) \
> + ({ u32 __v = __arch_getq32(c); rte_io_rmb(); __v; })
> +#define writeq32(v, c) \
> + ({ u32 __v = v; rte_io_wmb(); __arch_putq32(__v, c); __v; })
> +#define ioread64(_p) readq(_p)
> +#define iowrite64(_v, _p) writeq(_v, _p)
> +#define ioread32(_p) readq32(_p)
> +#define iowrite32(_v, _p) writeq32(_v, _p)
> +
> +#define ioread32be(_p) be32_to_cpu(readq32(_p))
> +#define iowrite32be(_v, _p) writeq32(be32_to_cpu(_v), _p)
> +
> +#define QDMA_IN(fsl_qdma_engine, addr) \
> + (((fsl_qdma_engine)->big_endian & QDMA_BIG_ENDIAN) ? \
> + ioread32be(addr) : ioread32(addr))
> +#define QDMA_OUT(fsl_qdma_engine, addr, val) \
> + (((fsl_qdma_engine)->big_endian & QDMA_BIG_ENDIAN) ? \
> + iowrite32be(val, addr) : iowrite32(val, addr))
> +
> +#define FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma_engine, x) \
> + (((fsl_qdma_engine)->block_offset) * (x))
> +
> +typedef void (*dma_call_back)(void *params);
> +
> +/* qDMA Command Descriptor Format */
> +struct fsl_qdma_format {
> + __le32 status; /* ser, status */
> + __le32 cfg; /* format, offset */
> + union {
> + struct {
> + __le32 addr_lo; /* low 32-bits of 40-bit address */
> + u8 addr_hi; /* high 8-bits of 40-bit address */
> + u8 __reserved1[2];
> + u8 cfg8b_w1; /* dd, queue */
> + };
> + __le64 data;
> + };
> +};
> +
> +/* qDMA Source Descriptor Format */
> +struct fsl_qdma_sdf {
> + __le32 rev3;
> + __le32 cfg; /* rev4, bit[0-11] - ssd, bit[12-23] sss */
> + __le32 rev5;
> + __le32 cmd;
> +};
> +
> +/* qDMA Destination Descriptor Format */
> +struct fsl_qdma_ddf {
> + __le32 rev1;
> + __le32 cfg; /* rev2, bit[0-11] - dsd, bit[12-23] - dss */
> + __le32 rev3;
> + __le32 cmd;
> +};
> +
> +enum dma_status {
> + DMA_COMPLETE,
> + DMA_IN_PROGRESS,
> + DMA_IN_PREPAR,
> + DMA_PAUSED,
> + DMA_ERROR,
> +};
> +
> +struct fsl_qdma_chan {
> + struct fsl_qdma_engine *qdma;
> + struct fsl_qdma_queue *queue;
> + enum dma_status status;
> + bool free;
> + struct list_head list;
> +};
> +
> +struct fsl_qdma_list {
> + struct list_head dma_list;
> +};
> +
> +struct fsl_qdma_queue {
> + struct fsl_qdma_format *virt_head;
> + struct list_head comp_used;
> + struct list_head comp_free;
> + dma_addr_t bus_addr;
> + u32 n_cq;
> + u32 id;
> + u32 count;
> + struct fsl_qdma_format *cq;
> + void *block_base;
> +};
> +
> +struct fsl_qdma_comp {
> + dma_addr_t bus_addr;
> + dma_addr_t desc_bus_addr;
> + void *virt_addr;
> + void *desc_virt_addr;
> + struct fsl_qdma_chan *qchan;
> + dma_call_back call_back_func;
> + void *params;
> + struct list_head list;
> +};
> +
> +struct fsl_qdma_engine {
> + int desc_allocated;
> + void *ctrl_base;
> + void *status_base;
> + void *block_base;
> + u32 n_chans;
> + u32 n_queues;
> + int error_irq;
> + bool big_endian;
> + struct fsl_qdma_queue *queue;
> + struct fsl_qdma_queue **status;
> + struct fsl_qdma_chan *chans;
> + u32 num_blocks;
> + int block_offset;
> +};
> +
> +static u64 pre_addr[CORE_NUMBER];
> +static u64 pre_queue[CORE_NUMBER];
> +static rte_atomic32_t wait_task[CORE_NUMBER];
> +
> +#ifndef QDMA_MEMZONE
> +/* Array of memzone pointers */
> +static const struct rte_memzone *qdma_mz_mapping[RTE_MAX_MEMZONE];
> +/* Counter to track current memzone allocated */
> +static uint16_t qdma_mz_count;
> +#endif
> +
> +#endif /* _FSL_QDMA_H_ */
> diff --git a/drivers/raw/dpaa_qdma/dpaa_qdma_logs.h b/drivers/raw/dpaa_qdma/dpaa_qdma_logs.h
> new file mode 100644
> index 0000000..7c11815
> --- /dev/null
> +++ b/drivers/raw/dpaa_qdma/dpaa_qdma_logs.h
> @@ -0,0 +1,46 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020 NXP
> + */
> +
> +#ifndef __DPAA_QDMA_LOGS_H__
> +#define __DPAA_QDMA_LOGS_H__
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +extern int dpaa_qdma_logtype;
> +
> +#define DPAA_QDMA_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, dpaa_qdma_logtype, "dpaa_qdma: " \
> + fmt "\n", ## args)
> +
> +#define DPAA_QDMA_DEBUG(fmt, args...) \
> + rte_log(RTE_LOG_DEBUG, dpaa_qdma_logtype, "dpaa_qdma: %s(): " \
> + fmt "\n", __func__, ## args)
> +
> +#define DPAA_QDMA_FUNC_TRACE() DPAA_QDMA_DEBUG(">>")
> +
> +#define DPAA_QDMA_INFO(fmt, args...) \
> + DPAA_QDMA_LOG(INFO, fmt, ## args)
> +#define DPAA_QDMA_ERR(fmt, args...) \
> + DPAA_QDMA_LOG(ERR, fmt, ## args)
> +#define DPAA_QDMA_WARN(fmt, args...) \
> + DPAA_QDMA_LOG(WARNING, fmt, ## args)
> +
> +/* DP Logs, toggled out at compile time if level lower than current level */
> +#define DPAA_QDMA_DP_LOG(level, fmt, args...) \
> + RTE_LOG_DP(level, PMD, "dpaa_qdma: " fmt "\n", ## args)
> +
> +#define DPAA_QDMA_DP_DEBUG(fmt, args...) \
> + DPAA_QDMA_DP_LOG(DEBUG, fmt, ## args)
> +#define DPAA_QDMA_DP_INFO(fmt, args...) \
> + DPAA_QDMA_DP_LOG(INFO, fmt, ## args)
> +#define DPAA_QDMA_DP_WARN(fmt, args...) \
> + DPAA_QDMA_DP_LOG(WARNING, fmt, ## args)
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* __DPAA_QDMA_LOGS_H__ */
> diff --git a/drivers/raw/dpaa_qdma/meson.build b/drivers/raw/dpaa_qdma/meson.build
> new file mode 100644
> index 0000000..ce2ac33
> --- /dev/null
> +++ b/drivers/raw/dpaa_qdma/meson.build
> @@ -0,0 +1,15 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright 2020 NXP
> +
> +if not is_linux
> + build = false
> + reason = 'only supported on Linux'
> +endif
> +
> +deps += ['rawdev', 'bus_dpaa']
> +sources = files('dpaa_qdma.c')
> +includes += include_directories('../dpaa2_qdma')
> +
> +if cc.has_argument('-Wno-pointer-arith')
> + cflags += '-Wno-pointer-arith'
> +endif
> diff --git a/drivers/raw/dpaa_qdma/rte_rawdev_dpaa_qdma_version.map b/drivers/raw/dpaa_qdma/rte_rawdev_dpaa_qdma_version.map
> new file mode 100644
> index 0000000..f9f17e4
> --- /dev/null
> +++ b/drivers/raw/dpaa_qdma/rte_rawdev_dpaa_qdma_version.map
> @@ -0,0 +1,3 @@
> +DPDK_20.0 {
> + local: *;
> +};
> diff --git a/drivers/raw/meson.build b/drivers/raw/meson.build
> index 2c1e65e..0e310ac 100644
> --- a/drivers/raw/meson.build
> +++ b/drivers/raw/meson.build
> @@ -5,7 +5,7 @@ if is_windows
> subdir_done()
> endif
>
> -drivers = ['dpaa2_cmdif', 'dpaa2_qdma',
> +drivers = ['dpaa_qdma', 'dpaa2_cmdif', 'dpaa2_qdma',
> 'ifpga', 'ioat', 'ntb',
> 'octeontx2_dma',
> 'octeontx2_ep',
* Re: [dpdk-dev] [PATCH RFC] raw: add dpaa qdma driver
2020-09-25 6:10 ` Hemant Agrawal
@ 2021-03-24 21:26 ` Thomas Monjalon
2021-04-05 6:35 ` Gagandeep Singh
0 siblings, 1 reply; 4+ messages in thread
From: Thomas Monjalon @ 2021-03-24 21:26 UTC (permalink / raw)
To: Gagandeep Singh, hemant.agrawal; +Cc: dev, nipun.gupta, Peng Ma
25/09/2020 08:10, Hemant Agrawal:
> Hi Gagan,
>
> On 9/7/2020 3:20 PM, Gagandeep Singh wrote:
> > This patch adds support for dpaa qdma based driver.
> >
> Can you provide more details and break it into logical parts?
Is it abandoned?
* Re: [dpdk-dev] [PATCH RFC] raw: add dpaa qdma driver
2021-03-24 21:26 ` Thomas Monjalon
@ 2021-04-05 6:35 ` Gagandeep Singh
0 siblings, 0 replies; 4+ messages in thread
From: Gagandeep Singh @ 2021-04-05 6:35 UTC (permalink / raw)
To: Thomas Monjalon, Hemant Agrawal; +Cc: dev, Nipun Gupta, Peng Ma
Hi Thomas,
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, March 25, 2021 2:56 AM
> To: Gagandeep Singh <G.Singh@nxp.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>
> Cc: dev@dpdk.org; Nipun Gupta <nipun.gupta@nxp.com>; Peng Ma
> <peng.ma@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH RFC] raw: add dpaa qdma driver
>
> 25/09/2020 08:10, Hemant Agrawal:
> > Hi Gagan,
> >
> > On 9/7/2020 3:20 PM, Gagandeep Singh wrote:
> > > This patch adds support for dpaa qdma based driver.
> > >
> > Can you provide more details and break it into logical parts?
>
> Is it abandoned?
>
>
It is abandoned for now. I will try to send the patches next month for the next DPDK release.