DPDK patches and discussions
* [PATCH 0/3] Add uadk compression and crypto PMD
@ 2022-06-20 12:35 Zhangfei Gao
  2022-06-20 12:35 ` [PATCH 1/3] compress/uadk: add uadk compression PMD Zhangfei Gao
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Zhangfei Gao @ 2022-06-20 12:35 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, Zhangfei Gao

The UADK compression PMD provides a poll mode compression & decompression driver.
The UADK crypto PMD provides a poll mode crypto driver.
All cryptography operations use the UADK crypto API.
All compression operations use the UADK compress API.

Hardware accelerators that integrate with UADK are expected to be supported.
Currently supported hardware platforms:
HiSilicon Kunpeng920 and Kunpeng930

Test:
sudo dpdk-test --vdev=compress_uadk
sudo dpdk-test --vdev=crypto_uadk

v1:
Targeting DPDK 22.11
Rebased on http://git.dpdk.org/next/dpdk-next-crypto/

Suggested by Akhil Goyal <gakhil@marvell.com>
> Current release cycle is DPDK-22.07 for which this patchset is late.
> As we had the V1 deadline last month.
> This patchset can go for next release cycle which is 22.11.

Zhangfei Gao (3):
  compress/uadk: add uadk compression PMD
  test/crypto: add cryptodev_uadk_autotest
  crypto/uadk: add uadk crypto PMD

 app/test/test_cryptodev.c                 |    7 +
 app/test/test_cryptodev.h                 |    1 +
 doc/guides/compressdevs/index.rst         |    1 +
 doc/guides/compressdevs/uadk.rst          |   60 ++
 doc/guides/cryptodevs/index.rst           |    1 +
 doc/guides/cryptodevs/uadk.rst            |   70 ++
 drivers/compress/meson.build              |    1 +
 drivers/compress/uadk/meson.build         |   28 +
 drivers/compress/uadk/uadk_compress_pmd.c |  489 +++++++++
 drivers/compress/uadk/version.map         |    3 +
 drivers/crypto/meson.build                |    1 +
 drivers/crypto/uadk/meson.build           |   28 +
 drivers/crypto/uadk/uadk_crypto_pmd.c     | 1137 +++++++++++++++++++++
 drivers/crypto/uadk/version.map           |    3 +
 14 files changed, 1830 insertions(+)
 create mode 100644 doc/guides/compressdevs/uadk.rst
 create mode 100644 doc/guides/cryptodevs/uadk.rst
 create mode 100644 drivers/compress/uadk/meson.build
 create mode 100644 drivers/compress/uadk/uadk_compress_pmd.c
 create mode 100644 drivers/compress/uadk/version.map
 create mode 100644 drivers/crypto/uadk/meson.build
 create mode 100644 drivers/crypto/uadk/uadk_crypto_pmd.c
 create mode 100644 drivers/crypto/uadk/version.map

-- 
2.36.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/3] compress/uadk: add uadk compression PMD
  2022-06-20 12:35 [PATCH 0/3] Add uadk compression and crypto PMD Zhangfei Gao
@ 2022-06-20 12:35 ` Zhangfei Gao
  2022-06-20 12:35 ` [PATCH 2/3] test/crypto: add cryptodev_uadk_autotest Zhangfei Gao
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Zhangfei Gao @ 2022-06-20 12:35 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, Zhangfei Gao

Add the UADK compression & decompression PMD, which relies on the UADK API.

Test:
sudo dpdk-test --vdev=compress_uadk
RTE>>compressdev_autotest
RTE>>quit

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
---
 doc/guides/compressdevs/index.rst         |   1 +
 doc/guides/compressdevs/uadk.rst          |  60 +++
 drivers/compress/meson.build              |   1 +
 drivers/compress/uadk/meson.build         |  28 ++
 drivers/compress/uadk/uadk_compress_pmd.c | 489 ++++++++++++++++++++++
 drivers/compress/uadk/version.map         |   3 +
 6 files changed, 582 insertions(+)
 create mode 100644 doc/guides/compressdevs/uadk.rst
 create mode 100644 drivers/compress/uadk/meson.build
 create mode 100644 drivers/compress/uadk/uadk_compress_pmd.c
 create mode 100644 drivers/compress/uadk/version.map

diff --git a/doc/guides/compressdevs/index.rst b/doc/guides/compressdevs/index.rst
index 54a3ef4273..e47a9ab9cf 100644
--- a/doc/guides/compressdevs/index.rst
+++ b/doc/guides/compressdevs/index.rst
@@ -14,4 +14,5 @@ Compression Device Drivers
     mlx5
     octeontx
     qat_comp
+    uadk
     zlib
diff --git a/doc/guides/compressdevs/uadk.rst b/doc/guides/compressdevs/uadk.rst
new file mode 100644
index 0000000000..3f2c35e08e
--- /dev/null
+++ b/doc/guides/compressdevs/uadk.rst
@@ -0,0 +1,60 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+    Copyright 2022-2023 Linaro ltd.
+
+UADK Compression Poll Mode Driver
+=======================================================
+
+UADK compression PMD provides a poll mode compression & decompression driver.
+All compression operations use the UADK compress API.
+Hardware accelerators that integrate with UADK are expected to be supported.
+
+Features
+--------
+
+UADK compression PMD has support for:
+
+Compression/Decompression algorithm:
+
+    * DEFLATE - using Fixed and Dynamic Huffman encoding
+
+Window size support:
+
+    * 32K
+
+Checksum generation:
+
+    * CRC32, Adler-32 and combined checksum
+
+Test steps
+-----------
+
+   .. code-block:: console
+
+	1. Build
+	cd dpdk
+	mkdir build
+	meson build (--reconfigure)
+	cd build
+	ninja
+	sudo ninja install
+
+	2. Prepare
+	echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages
+	mkdir -p /mnt/huge_2mb
+	mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
+
+	3. Test with compress_uadk
+	sudo dpdk-test --vdev=compress_uadk
+	RTE>>compressdev_autotest
+	RTE>>quit
+
+Dependency
+------------
+
+UADK compression PMD relies on the UADK library [1].
+
+[1] https://github.com/Linaro/uadk
diff --git a/drivers/compress/meson.build b/drivers/compress/meson.build
index abe043ab94..041a45ba41 100644
--- a/drivers/compress/meson.build
+++ b/drivers/compress/meson.build
@@ -10,6 +10,7 @@ drivers = [
         'mlx5',
         'octeontx',
+        'uadk',
         'zlib',
 ]
 
 std_deps = ['compressdev'] # compressdev pulls in all other needed deps
diff --git a/drivers/compress/uadk/meson.build b/drivers/compress/uadk/meson.build
new file mode 100644
index 0000000000..579d673f8d
--- /dev/null
+++ b/drivers/compress/uadk/meson.build
@@ -0,0 +1,28 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+# Copyright 2022-2023 Linaro ltd.
+
+if not is_linux
+    build = false
+    reason = 'only supported on Linux'
+    subdir_done()
+endif
+
+if arch_subdir != 'arm' or not dpdk_conf.get('RTE_ARCH_64')
+    build = false
+    reason = 'only supported on aarch64'
+    subdir_done()
+endif
+
+sources = files(
+        'uadk_compress_pmd.c',
+)
+
+deps += 'bus_vdev'
+dep = cc.find_library('libwd_comp', dirs: ['/usr/local/lib'], required: false)
+if not dep.found()
+	build = false
+	reason = 'missing dependency, "libwd_comp"'
+else
+	ext_deps += dep
+endif
diff --git a/drivers/compress/uadk/uadk_compress_pmd.c b/drivers/compress/uadk/uadk_compress_pmd.c
new file mode 100644
index 0000000000..2e94961135
--- /dev/null
+++ b/drivers/compress/uadk/uadk_compress_pmd.c
@@ -0,0 +1,489 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+ * Copyright 2022-2023 Linaro ltd.
+ */
+
+#include <rte_bus_vdev.h>
+#include <rte_compressdev_pmd.h>
+#include <rte_malloc.h>
+#include <uadk/wd_comp.h>
+#include <uadk/wd_sched.h>
+
+struct uadk_compress_priv {
+	struct rte_mempool *mp;
+} __rte_cache_aligned;
+
+struct uadk_qp {
+	struct rte_ring *processed_pkts;
+	/* Ring for placing process packets */
+	struct rte_compressdev_stats qp_stats;
+	/* Queue pair statistics */
+	uint16_t id;
+	/* Queue Pair Identifier */
+	char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
+	/* Unique Queue Pair Name */
+} __rte_cache_aligned;
+
+struct uadk_stream {
+	handle_t handle;
+	enum rte_comp_xform_type type;
+} __rte_cache_aligned;
+
+RTE_LOG_REGISTER_DEFAULT(uadk_compress_logtype, INFO);
+
+#define UADK_LOG(level, fmt, ...)  \
+	rte_log(RTE_LOG_ ## level, uadk_compress_logtype,  \
+		"%s() line %u: " fmt "\n", __func__, __LINE__,  \
+		## __VA_ARGS__)
+
+static int
+uadk_compress_pmd_config(struct rte_compressdev *dev,
+			 struct rte_compressdev_config *config)
+{
+	char mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct uadk_compress_priv *priv;
+	struct rte_mempool *mp;
+	int ret;
+
+	if (dev == NULL || config == NULL)
+		return -EINVAL;
+
+	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
+		 "stream_mp_%u", dev->data->dev_id);
+	priv = dev->data->dev_private;
+
+	/* alloc resources */
+	ret = wd_comp_env_init(NULL);
+	if (ret < 0)
+		return -EINVAL;
+
+	mp = priv->mp;
+	if (mp == NULL) {
+		mp = rte_mempool_create(mp_name,
+					config->max_nb_priv_xforms +
+					config->max_nb_streams,
+					sizeof(struct uadk_stream),
+					0, 0, NULL, NULL, NULL,
+					NULL, config->socket_id, 0);
+		if (mp == NULL) {
+			UADK_LOG(ERR, "Cannot create private xform pool on socket %d",
+				 config->socket_id);
+			ret = -ENOMEM;
+			goto err_mempool;
+		}
+		priv->mp = mp;
+	}
+
+	return 0;
+
+err_mempool:
+	wd_comp_env_uninit();
+	return ret;
+}
+
+static int
+uadk_compress_pmd_start(struct rte_compressdev *dev __rte_unused)
+{
+	return 0;
+}
+
+static void
+uadk_compress_pmd_stop(struct rte_compressdev *dev __rte_unused)
+{
+}
+
+static int
+uadk_compress_pmd_close(struct rte_compressdev *dev)
+{
+	struct uadk_compress_priv *priv =
+		(struct uadk_compress_priv *)dev->data->dev_private;
+
+	/* free resources */
+	rte_mempool_free(priv->mp);
+	priv->mp = NULL;
+	wd_comp_env_uninit();
+
+	return 0;
+}
+
+static void
+uadk_compress_pmd_stats_get(struct rte_compressdev *dev,
+			    struct rte_compressdev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+static void
+uadk_compress_pmd_stats_reset(struct rte_compressdev *dev __rte_unused)
+{
+}
+
+static const struct
+rte_compressdev_capabilities uadk_compress_pmd_capabilities[] = {
+	{   /* Deflate */
+		.algo = RTE_COMP_ALGO_DEFLATE,
+		.comp_feature_flags = RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				      RTE_COMP_FF_HUFFMAN_FIXED |
+				      RTE_COMP_FF_HUFFMAN_DYNAMIC,
+	},
+
+	RTE_COMP_END_OF_CAPABILITIES_LIST()
+};
+
+static void
+uadk_compress_pmd_info_get(struct rte_compressdev *dev,
+			   struct rte_compressdev_info *dev_info)
+{
+	if (dev_info != NULL) {
+		dev_info->driver_name = dev->device->driver->name;
+		dev_info->feature_flags = dev->feature_flags;
+		dev_info->capabilities = uadk_compress_pmd_capabilities;
+	}
+}
+
+static int
+uadk_compress_pmd_qp_release(struct rte_compressdev *dev, uint16_t qp_id)
+{
+	struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+	if (qp != NULL) {
+		rte_ring_free(qp->processed_pkts);
+		rte_free(qp);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+
+	return 0;
+}
+
+static int
+uadk_pmd_qp_set_unique_name(struct rte_compressdev *dev,
+			    struct uadk_qp *qp)
+{
+	unsigned int n = snprintf(qp->name, sizeof(qp->name),
+				 "uadk_pmd_%u_qp_%u",
+				 dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -EINVAL;
+
+	return 0;
+}
+
+static struct rte_ring *
+uadk_pmd_qp_create_processed_pkts_ring(struct uadk_qp *qp,
+				       unsigned int ring_size, int socket_id)
+{
+	struct rte_ring *r = qp->processed_pkts;
+
+	if (r) {
+		if (rte_ring_get_size(r) >= ring_size) {
+			UADK_LOG(INFO, "Reusing existing ring %s for processed packets",
+				 qp->name);
+			return r;
+		}
+
+		UADK_LOG(ERR, "Unable to reuse existing ring %s for processed packets",
+			 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			       RING_F_EXACT_SZ);
+}
+
+static int
+uadk_compress_pmd_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+			   uint32_t max_inflight_ops, int socket_id)
+{
+	struct uadk_qp *qp = NULL;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		uadk_compress_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("uadk PMD Queue Pair", sizeof(*qp),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (uadk_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->processed_pkts = uadk_pmd_qp_create_processed_pkts_ring(qp,
+						max_inflight_ops, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp) {
+		rte_free(qp);
+		qp = NULL;
+	}
+	return -EINVAL;
+}
+
+static int
+uadk_compress_pmd_xform_create(struct rte_compressdev *dev,
+			       const struct rte_comp_xform *xform,
+			       void **private_xform)
+{
+	struct uadk_compress_priv *priv = dev->data->dev_private;
+	struct wd_comp_sess_setup setup = {0};
+	struct sched_params param = {0};
+	struct uadk_stream *stream;
+	handle_t handle;
+
+	if (xform == NULL) {
+		UADK_LOG(ERR, "invalid xform struct");
+		return -EINVAL;
+	}
+
+	if (rte_mempool_get(priv->mp, private_xform)) {
+		UADK_LOG(ERR, "Couldn't get object from session mempool");
+		return -ENOMEM;
+	}
+
+	stream = *((struct uadk_stream **)private_xform);
+
+	switch (xform->type) {
+	case RTE_COMP_COMPRESS:
+		switch (xform->compress.algo) {
+		case RTE_COMP_ALGO_NULL:
+			break;
+		case RTE_COMP_ALGO_DEFLATE:
+			setup.alg_type = WD_ZLIB;
+			setup.win_sz = WD_COMP_WS_8K;
+			setup.comp_lv = WD_COMP_L8;
+			setup.op_type = WD_DIR_COMPRESS;
+			param.type = setup.op_type;
+			param.numa_id = -1;	/* choose nearby numa node */
+			setup.sched_param = &param;
+			break;
+		default:
+			goto err;
+		}
+		break;
+	case RTE_COMP_DECOMPRESS:
+		switch (xform->decompress.algo) {
+		case RTE_COMP_ALGO_NULL:
+			break;
+		case RTE_COMP_ALGO_DEFLATE:
+			setup.alg_type = WD_ZLIB;
+			setup.comp_lv = WD_COMP_L8;
+			setup.op_type = WD_DIR_DECOMPRESS;
+			param.type = setup.op_type;
+			param.numa_id = -1;	/* choose nearby numa node */
+			setup.sched_param = &param;
+			break;
+		default:
+			goto err;
+		}
+		break;
+	default:
+		UADK_LOG(ERR, "Algorithm %u is not supported.", xform->type);
+		goto err;
+	}
+
+	handle = wd_comp_alloc_sess(&setup);
+	if (!handle)
+		goto err;
+
+	stream->handle = handle;
+	stream->type = xform->type;
+
+	return 0;
+
+err:
+	rte_mempool_put(priv->mp, private_xform);
+	return -EINVAL;
+}
+
+static int
+uadk_compress_pmd_xform_free(struct rte_compressdev *dev __rte_unused, void *private_xform)
+{
+	struct uadk_stream *stream = (struct uadk_stream *)private_xform;
+	struct rte_mempool *mp;
+
+	if (!stream)
+		return -EINVAL;
+
+	wd_comp_free_sess(stream->handle);
+	memset(stream, 0, sizeof(struct uadk_stream));
+	mp = rte_mempool_from_obj(stream);
+	rte_mempool_put(mp, stream);
+
+	return 0;
+}
+
+static struct rte_compressdev_ops uadk_compress_pmd_ops = {
+		.dev_configure		= uadk_compress_pmd_config,
+		.dev_start		= uadk_compress_pmd_start,
+		.dev_stop		= uadk_compress_pmd_stop,
+		.dev_close		= uadk_compress_pmd_close,
+		.stats_get		= uadk_compress_pmd_stats_get,
+		.stats_reset		= uadk_compress_pmd_stats_reset,
+		.dev_infos_get		= uadk_compress_pmd_info_get,
+		.queue_pair_setup	= uadk_compress_pmd_qp_setup,
+		.queue_pair_release	= uadk_compress_pmd_qp_release,
+		.private_xform_create	= uadk_compress_pmd_xform_create,
+		.private_xform_free	= uadk_compress_pmd_xform_free,
+		.stream_create		= NULL,
+		.stream_free		= NULL,
+};
+
+static uint16_t
+uadk_compress_pmd_enqueue_burst_sync(void *queue_pair,
+				     struct rte_comp_op **ops, uint16_t nb_ops)
+{
+	struct uadk_qp *qp = queue_pair;
+	struct uadk_stream *stream;
+	struct rte_comp_op *op;
+	uint16_t enqd = 0;
+	int i, ret = 0;
+
+	for (i = 0; i < nb_ops; i++) {
+		op = ops[i];
+
+		if (op->op_type == RTE_COMP_OP_STATEFUL) {
+			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+		} else {
+			/* process stateless ops */
+			stream = (struct uadk_stream *)op->private_xform;
+			if (stream) {
+				struct wd_comp_req req = {0};
+				uint16_t dst_len = rte_pktmbuf_data_len(op->m_dst);
+
+				req.src = rte_pktmbuf_mtod(op->m_src, uint8_t *);
+				req.src_len = op->src.length;
+				req.dst = rte_pktmbuf_mtod(op->m_dst, uint8_t *);
+				req.dst_len = dst_len;
+				req.op_type = stream->type;
+				req.cb = NULL;
+				req.data_fmt = WD_FLAT_BUF;
+				do {
+					ret = wd_do_comp_sync(stream->handle, &req);
+				} while (ret == -WD_EBUSY);
+
+				op->consumed += req.src_len;
+
+				if (req.dst_len <= dst_len) {
+					op->produced += req.dst_len;
+					op->status = RTE_COMP_OP_STATUS_SUCCESS;
+				} else  {
+					op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
+				}
+
+				if (ret) {
+					op->status = RTE_COMP_OP_STATUS_ERROR;
+					break;
+				}
+			} else {
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+			}
+		}
+
+		/* Whatever the outcome of the op, place it in the completion
+		 * queue along with its status.
+		 */
+		if (!ret)
+			ret = rte_ring_enqueue(qp->processed_pkts, (void *)op);
+
+		if (unlikely(ret)) {
+			/* increment count if failed to enqueue op */
+			qp->qp_stats.enqueue_err_count++;
+		} else {
+			qp->qp_stats.enqueued_count++;
+			enqd++;
+		}
+	}
+
+	return enqd;
+}
+
+static uint16_t
+uadk_compress_pmd_dequeue_burst_sync(void *queue_pair,
+				     struct rte_comp_op **ops,
+				     uint16_t nb_ops)
+{
+	struct uadk_qp *qp = queue_pair;
+	unsigned int nb_dequeued = 0;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)ops, nb_ops, NULL);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+static int
+uadk_compress_probe(struct rte_vdev_device *vdev)
+{
+	struct rte_compressdev_pmd_init_params init_params = {
+		"",
+		rte_socket_id(),
+	};
+	struct rte_compressdev *compressdev;
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+
+	if (name == NULL)
+		return -EINVAL;
+
+	compressdev = rte_compressdev_pmd_create(name, &vdev->device,
+			sizeof(struct uadk_compress_priv), &init_params);
+	if (compressdev == NULL) {
+		UADK_LOG(ERR, "driver %s: create failed", init_params.name);
+		return -ENODEV;
+	}
+
+	compressdev->dev_ops = &uadk_compress_pmd_ops;
+	compressdev->dequeue_burst = uadk_compress_pmd_dequeue_burst_sync;
+	compressdev->enqueue_burst = uadk_compress_pmd_enqueue_burst_sync;
+	compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
+
+	return 0;
+}
+
+static int
+uadk_compress_remove(struct rte_vdev_device *vdev)
+{
+	struct rte_compressdev *compressdev;
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	compressdev = rte_compressdev_pmd_get_named_dev(name);
+	if (compressdev == NULL)
+		return -ENODEV;
+
+	return rte_compressdev_pmd_destroy(compressdev);
+}
+
+static struct rte_vdev_driver uadk_compress_pmd = {
+	.probe = uadk_compress_probe,
+	.remove = uadk_compress_remove,
+};
+
+#define UADK_COMPRESS_DRIVER_NAME compress_uadk
+
+RTE_PMD_REGISTER_VDEV(UADK_COMPRESS_DRIVER_NAME, uadk_compress_pmd);
diff --git a/drivers/compress/uadk/version.map b/drivers/compress/uadk/version.map
new file mode 100644
index 0000000000..c2e0723b4c
--- /dev/null
+++ b/drivers/compress/uadk/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+	local: *;
+};
-- 
2.36.1



* [PATCH 2/3] test/crypto: add cryptodev_uadk_autotest
  2022-06-20 12:35 [PATCH 0/3] Add uadk compression and crypto PMD Zhangfei Gao
  2022-06-20 12:35 ` [PATCH 1/3] compress/uadk: add uadk compression PMD Zhangfei Gao
@ 2022-06-20 12:35 ` Zhangfei Gao
  2022-06-20 12:35 ` [PATCH 3/3] crypto/uadk: add uadk crypto PMD Zhangfei Gao
  2022-08-28 13:02 ` [EXT] [PATCH 0/3] Add uadk compression and " Akhil Goyal
  3 siblings, 0 replies; 8+ messages in thread
From: Zhangfei Gao @ 2022-06-20 12:35 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, Zhangfei Gao

Example:
sudo dpdk-test --vdev=crypto_uadk --log-level=6
RTE>>cryptodev_uadk_autotest
RTE>>quit

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
---
 app/test/test_cryptodev.c | 7 +++++++
 app/test/test_cryptodev.h | 1 +
 2 files changed, 8 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 2766e0cc10..1aaadcc474 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -16307,6 +16307,12 @@ test_cryptodev_qat(void)
 	return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
 }
 
+static int
+test_cryptodev_uadk(void)
+{
+	return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_UADK_PMD));
+}
+
 static int
 test_cryptodev_virtio(void)
 {
@@ -16650,6 +16656,7 @@ REGISTER_TEST_COMMAND(cryptodev_sw_mvsam_autotest, test_cryptodev_mrvl);
 REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_autotest, test_cryptodev_dpaa2_sec);
 REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec);
 REGISTER_TEST_COMMAND(cryptodev_ccp_autotest, test_cryptodev_ccp);
+REGISTER_TEST_COMMAND(cryptodev_uadk_autotest, test_cryptodev_uadk);
 REGISTER_TEST_COMMAND(cryptodev_virtio_autotest, test_cryptodev_virtio);
 REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
 REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 29a7d4db2b..abd795f54a 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -74,6 +74,7 @@
 #define CRYPTODEV_NAME_CN9K_PMD		crypto_cn9k
 #define CRYPTODEV_NAME_CN10K_PMD	crypto_cn10k
 #define CRYPTODEV_NAME_MLX5_PMD		crypto_mlx5
+#define CRYPTODEV_NAME_UADK_PMD		crypto_uadk
 
 
 enum cryptodev_api_test_type {
-- 
2.36.1



* [PATCH 3/3] crypto/uadk: add uadk crypto PMD
  2022-06-20 12:35 [PATCH 0/3] Add uadk compression and crypto PMD Zhangfei Gao
  2022-06-20 12:35 ` [PATCH 1/3] compress/uadk: add uadk compression PMD Zhangfei Gao
  2022-06-20 12:35 ` [PATCH 2/3] test/crypto: add cryptodev_uadk_autotest Zhangfei Gao
@ 2022-06-20 12:35 ` Zhangfei Gao
  2022-08-28 13:02 ` [EXT] [PATCH 0/3] Add uadk compression and " Akhil Goyal
  3 siblings, 0 replies; 8+ messages in thread
From: Zhangfei Gao @ 2022-06-20 12:35 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, Zhangfei Gao

Add the UADK crypto PMD, which relies on the UADK crypto API.

Test:
sudo dpdk-test --vdev=crypto_uadk (--log-level=6)
RTE>>cryptodev_uadk_autotest
RTE>>quit

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
---
 doc/guides/cryptodevs/index.rst       |    1 +
 doc/guides/cryptodevs/uadk.rst        |   70 ++
 drivers/crypto/meson.build            |    1 +
 drivers/crypto/uadk/meson.build       |   28 +
 drivers/crypto/uadk/uadk_crypto_pmd.c | 1137 +++++++++++++++++++++++++
 drivers/crypto/uadk/version.map       |    3 +
 6 files changed, 1240 insertions(+)
 create mode 100644 doc/guides/cryptodevs/uadk.rst
 create mode 100644 drivers/crypto/uadk/meson.build
 create mode 100644 drivers/crypto/uadk/uadk_crypto_pmd.c
 create mode 100644 drivers/crypto/uadk/version.map

diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 39cca6dbde..cb4ce227e9 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -30,5 +30,6 @@ Crypto Device Drivers
     scheduler
     snow3g
     qat
+    uadk
     virtio
     zuc
diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst
new file mode 100644
index 0000000000..dd35195309
--- /dev/null
+++ b/doc/guides/cryptodevs/uadk.rst
@@ -0,0 +1,70 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+    Copyright 2022-2023 Linaro ltd.
+
+UADK Crypto Poll Mode Driver
+=======================================================
+
+UADK crypto PMD provides a poll mode crypto driver.
+All cryptography operations use the UADK crypto API.
+Hardware accelerators that integrate with UADK are expected to be supported.
+
+Features
+--------
+
+UADK crypto PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_AES_ECB``
+* ``RTE_CRYPTO_CIPHER_AES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES_XTS``
+* ``RTE_CRYPTO_CIPHER_DES_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_MD5``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA1``
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+
+Test steps
+-----------
+
+   .. code-block:: console
+
+	1. Build
+	cd dpdk
+	mkdir build
+	meson build (--reconfigure)
+	cd build
+	ninja
+	sudo ninja install
+
+	2. Prepare
+	echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages
+	mkdir -p /mnt/huge_2mb
+	mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
+
+	3. Test with crypto_uadk
+	sudo dpdk-test --vdev=crypto_uadk (--log-level=6)
+	RTE>>cryptodev_uadk_autotest
+	RTE>>quit
+
+Dependency
+------------
+
+UADK crypto PMD relies on the UADK library [1].
+
+[1] https://github.com/Linaro/uadk
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index 147b8cf633..ee5377deff 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -18,6 +18,7 @@ drivers = [
         'octeontx',
         'openssl',
         'scheduler',
+        'uadk',
         'virtio',
 ]
 
diff --git a/drivers/crypto/uadk/meson.build b/drivers/crypto/uadk/meson.build
new file mode 100644
index 0000000000..52abd791ce
--- /dev/null
+++ b/drivers/crypto/uadk/meson.build
@@ -0,0 +1,28 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+# Copyright 2022-2023 Linaro ltd.
+
+if not is_linux
+    build = false
+    reason = 'only supported on Linux'
+    subdir_done()
+endif
+
+if arch_subdir != 'arm' or not dpdk_conf.get('RTE_ARCH_64')
+    build = false
+    reason = 'only supported on aarch64'
+    subdir_done()
+endif
+
+sources = files(
+        'uadk_crypto_pmd.c',
+)
+
+deps += 'bus_vdev'
+dep = cc.find_library('libwd_crypto', dirs: ['/usr/local/lib'], required: false)
+if not dep.found()
+	build = false
+	reason = 'missing dependency, "libwd_crypto"'
+else
+	ext_deps += dep
+endif
diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c b/drivers/crypto/uadk/uadk_crypto_pmd.c
new file mode 100644
index 0000000000..6348b5c66b
--- /dev/null
+++ b/drivers/crypto/uadk/uadk_crypto_pmd.c
@@ -0,0 +1,1137 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+ * Copyright 2022-2023 Linaro ltd.
+ */
+
+#include <cryptodev_pmd.h>
+#include <rte_bus_vdev.h>
+#include <rte_comp.h>
+#include <uadk/wd_cipher.h>
+#include <uadk/wd_digest.h>
+#include <uadk/wd_sched.h>
+
+struct uadk_crypto_priv {
+	bool env_cipher_init;
+	bool env_auth_init;
+} __rte_cache_aligned;
+
+/* Maximum length for digest (SHA-512 needs 64 bytes) */
+#define DIGEST_LENGTH_MAX 64
+
+struct uadk_qp {
+	struct rte_ring *processed_pkts;
+	/* Ring for placing process packets */
+	struct rte_cryptodev_stats qp_stats;
+	/* Queue pair statistics */
+	uint16_t id;
+	/* Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	/* Unique Queue Pair Name */
+	uint8_t temp_digest[DIGEST_LENGTH_MAX];
+	/* Buffer used to store the digest generated
+	 * by the driver when verifying a digest provided
+	 * by the user (using authentication verify operation)
+	 */
+} __rte_cache_aligned;
+
+enum uadk_chain_order {
+	UADK_CHAIN_ONLY_CIPHER,
+	UADK_CHAIN_ONLY_AUTH,
+	UADK_CHAIN_CIPHER_AUTH,
+	UADK_CHAIN_AUTH_CIPHER,
+	UADK_CHAIN_NOT_SUPPORTED
+};
+
+struct uadk_crypto_session {
+	handle_t handle_cipher;
+	handle_t handle_digest;
+	enum uadk_chain_order chain_order;
+
+	/* IV parameters */
+	struct {
+		uint16_t length;
+		uint16_t offset;
+	} iv;
+
+	/* Cipher Parameters */
+	struct {
+		enum rte_crypto_cipher_operation direction;
+		/* cipher operation direction */
+		struct wd_cipher_req req;
+	} cipher;
+
+	/* Authentication Parameters */
+	struct {
+		struct wd_digest_req req;
+		enum rte_crypto_auth_operation operation;
+		/* auth operation generate or verify */
+		uint16_t digest_length;
+		/* digest length */
+	} auth;
+} __rte_cache_aligned;
+
+static uint8_t uadk_cryptodev_driver_id;
+
+RTE_LOG_REGISTER_DEFAULT(uadk_crypto_logtype, INFO);
+
+#define UADK_LOG(level, fmt, ...)  \
+	rte_log(RTE_LOG_ ## level, uadk_crypto_logtype,  \
+		"%s() line %u: " fmt "\n", __func__, __LINE__,  \
+		## __VA_ARGS__)
+
+static const struct rte_cryptodev_capabilities uadk_crypto_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* MD5 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA384 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384,
+				.block_size = 128,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512,
+				.block_size = 128,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* AES ECB */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_ECB,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* AES XTS */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_XTS,
+				.block_size = 1,
+				.key_size = {
+					.min = 32,
+					.max = 64,
+					.increment = 32
+				},
+				.iv_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	/* End of symmetric capabilities */
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+/* Configure device */
+static int
+uadk_crypto_pmd_config(struct rte_cryptodev *dev __rte_unused,
+		       struct rte_cryptodev_config *config __rte_unused)
+{
+	return 0;
+}
+
+/* Start device */
+static int
+uadk_crypto_pmd_start(struct rte_cryptodev *dev __rte_unused)
+{
+	return 0;
+}
+
+/* Stop device */
+static void
+uadk_crypto_pmd_stop(struct rte_cryptodev *dev __rte_unused)
+{
+}
+
+/* Close device */
+static int
+uadk_crypto_pmd_close(struct rte_cryptodev *dev)
+{
+	struct uadk_crypto_priv *priv = dev->data->dev_private;
+
+	if (priv->env_cipher_init) {
+		wd_cipher_env_uninit();
+		priv->env_cipher_init = false;
+	}
+
+	if (priv->env_auth_init) {
+		wd_digest_env_uninit();
+		priv->env_auth_init = false;
+	}
+
+	return 0;
+}
+
+/* Get device statistics */
+static void
+uadk_crypto_pmd_stats_get(struct rte_cryptodev *dev,
+			  struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/* Reset device statistics */
+static void
+uadk_crypto_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+/* Get device info */
+static void
+uadk_crypto_pmd_info_get(struct rte_cryptodev *dev,
+			 struct rte_cryptodev_info *dev_info)
+{
+	if (dev_info != NULL) {
+		dev_info->driver_id = dev->driver_id;
+		dev_info->driver_name = dev->device->driver->name;
+		dev_info->max_nb_queue_pairs = 128;
+		/* No limit of number of sessions */
+		dev_info->sym.max_nb_sessions = 0;
+		dev_info->feature_flags = dev->feature_flags;
+		dev_info->capabilities = uadk_crypto_capabilities;
+	}
+}
+
+/* Release queue pair */
+static int
+uadk_crypto_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+	if (qp) {
+		rte_ring_free(qp->processed_pkts);
+		rte_free(qp);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+
+	return 0;
+}
+
+/* set a unique name for the queue pair based on its name, dev_id and qp_id */
+static int
+uadk_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+			    struct uadk_qp *qp)
+{
+	unsigned int n = snprintf(qp->name, sizeof(qp->name),
+				  "uadk_crypto_pmd_%u_qp_%u",
+				  dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -EINVAL;
+
+	return 0;
+}
+
+/* Create a ring to place process packets on */
+static struct rte_ring *
+uadk_pmd_qp_create_processed_pkts_ring(struct uadk_qp *qp,
+				       unsigned int ring_size, int socket_id)
+{
+	struct rte_ring *r = qp->processed_pkts;
+
+	if (r) {
+		if (rte_ring_get_size(r) >= ring_size) {
+			UADK_LOG(INFO, "Reusing existing ring %s for processed packets",
+				 qp->name);
+			return r;
+		}
+
+		UADK_LOG(ERR, "Unable to reuse existing ring %s for processed packets",
+			 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			       RING_F_EXACT_SZ);
+}
+
+static int
+uadk_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+			 const struct rte_cryptodev_qp_conf *qp_conf,
+			 int socket_id)
+{
+	struct uadk_qp *qp;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		uadk_crypto_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("uadk PMD Queue Pair", sizeof(*qp),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return -ENOMEM;
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (uadk_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->processed_pkts = uadk_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	return 0;
+
+qp_setup_cleanup:
+	dev->data->queue_pairs[qp_id] = NULL;
+	rte_free(qp);
+	return -EINVAL;
+}
+
+static unsigned int
+uadk_crypto_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct uadk_crypto_session);
+}
+
+static enum uadk_chain_order
+uadk_get_chain_order(const struct rte_crypto_sym_xform *xform)
+{
+	enum uadk_chain_order res = UADK_CHAIN_NOT_SUPPORTED;
+
+	if (xform != NULL) {
+		if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+			if (xform->next == NULL)
+				res = UADK_CHAIN_ONLY_AUTH;
+			else if (xform->next->type ==
+					RTE_CRYPTO_SYM_XFORM_CIPHER)
+				res = UADK_CHAIN_AUTH_CIPHER;
+		}
+
+		if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+			if (xform->next == NULL)
+				res = UADK_CHAIN_ONLY_CIPHER;
+			else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+				res = UADK_CHAIN_CIPHER_AUTH;
+		}
+	}
+
+	return res;
+}
+
+static int
+uadk_set_session_cipher_parameters(struct rte_cryptodev *dev,
+				   struct uadk_crypto_session *sess,
+				   struct rte_crypto_sym_xform *xform)
+{
+	struct uadk_crypto_priv *priv = dev->data->dev_private;
+	struct rte_crypto_cipher_xform *cipher = &xform->cipher;
+	struct wd_cipher_sess_setup setup = {0};
+	struct sched_params params = {0};
+	int ret;
+
+	if (!priv->env_cipher_init) {
+		ret = wd_cipher_env_init(NULL);
+		if (ret < 0)
+			return -EINVAL;
+		priv->env_cipher_init = true;
+	}
+
+	sess->cipher.direction = cipher->op;
+	sess->iv.offset = cipher->iv.offset;
+	sess->iv.length = cipher->iv.length;
+
+	switch (cipher->algo) {
+	/* Cover supported cipher algorithms */
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+		setup.alg = WD_CIPHER_AES;
+		setup.mode = WD_CIPHER_CTR;
+		sess->cipher.req.out_bytes = 64;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+		setup.alg = WD_CIPHER_AES;
+		setup.mode = WD_CIPHER_ECB;
+		sess->cipher.req.out_bytes = 16;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		setup.alg = WD_CIPHER_AES;
+		setup.mode = WD_CIPHER_CBC;
+		if (cipher->key.length == 16)
+			sess->cipher.req.out_bytes = 16;
+		else
+			sess->cipher.req.out_bytes = 64;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+		setup.alg = WD_CIPHER_AES;
+		setup.mode = WD_CIPHER_XTS;
+		if (cipher->key.length == 16)
+			sess->cipher.req.out_bytes = 32;
+		else
+			sess->cipher.req.out_bytes = 512;
+		break;
+	default:
+		ret = -ENOTSUP;
+		goto env_uninit;
+	}
+
+	params.numa_id = -1;	/* choose nearby numa node */
+	setup.sched_param = &params;
+	sess->handle_cipher = wd_cipher_alloc_sess(&setup);
+	if (!sess->handle_cipher) {
+		UADK_LOG(ERR, "uadk failed to alloc session!\n");
+		ret = -EINVAL;
+		goto env_uninit;
+	}
+
+	ret = wd_cipher_set_key(sess->handle_cipher, cipher->key.data, cipher->key.length);
+	if (ret) {
+		wd_cipher_free_sess(sess->handle_cipher);
+		UADK_LOG(ERR, "uadk failed to set key!\n");
+		ret = -EINVAL;
+		goto env_uninit;
+	}
+
+	return 0;
+
+env_uninit:
+	wd_cipher_env_uninit();
+	priv->env_cipher_init = false;
+	return ret;
+}
+
+/* Set session auth parameters */
+static int
+uadk_set_session_auth_parameters(struct rte_cryptodev *dev,
+				 struct uadk_crypto_session *sess,
+				 struct rte_crypto_sym_xform *xform)
+{
+	struct uadk_crypto_priv *priv = dev->data->dev_private;
+	struct wd_digest_sess_setup setup = {0};
+	struct sched_params params = {0};
+	int ret;
+
+	if (!priv->env_auth_init) {
+		ret = wd_digest_env_init(NULL);
+		if (ret < 0)
+			return -EINVAL;
+		priv->env_auth_init = true;
+	}
+
+	sess->auth.operation = xform->auth.op;
+	sess->auth.digest_length = xform->auth.digest_length;
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_MD5) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_MD5;
+		sess->auth.req.out_buf_bytes = 16;
+		sess->auth.req.out_bytes = 16;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_SHA1;
+		sess->auth.req.out_buf_bytes = 20;
+		sess->auth.req.out_bytes = 20;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA224) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_SHA224;
+		sess->auth.req.out_buf_bytes = 28;
+		sess->auth.req.out_bytes = 28;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA256) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_SHA256;
+		sess->auth.req.out_buf_bytes = 32;
+		sess->auth.req.out_bytes = 32;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA384) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_SHA384;
+		sess->auth.req.out_buf_bytes = 48;
+		sess->auth.req.out_bytes = 48;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA512) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_SHA512;
+		sess->auth.req.out_buf_bytes = 64;
+		sess->auth.req.out_bytes = 64;
+		break;
+	default:
+		ret = -ENOTSUP;
+		goto env_uninit;
+	}
+
+	params.numa_id = -1;	/* choose nearby numa node */
+	setup.sched_param = &params;
+	sess->handle_digest = wd_digest_alloc_sess(&setup);
+	if (!sess->handle_digest) {
+		UADK_LOG(ERR, "uadk failed to alloc session!\n");
+		ret = -EINVAL;
+		goto env_uninit;
+	}
+
+	/* if mode is HMAC, should set key */
+	if (setup.mode == WD_DIGEST_HMAC) {
+		ret = wd_digest_set_key(sess->handle_digest,
+					xform->auth.key.data,
+					xform->auth.key.length);
+		if (ret) {
+			UADK_LOG(ERR, "uadk failed to set key!\n");
+			wd_digest_free_sess(sess->handle_digest);
+			sess->handle_digest = 0;
+			ret = -EINVAL;
+			goto env_uninit;
+		}
+	}
+
+	return 0;
+
+env_uninit:
+	wd_digest_env_uninit();
+	priv->env_auth_init = false;
+	return ret;
+}
+
+static int
+uadk_crypto_sym_session_configure(struct rte_cryptodev *dev,
+				  struct rte_crypto_sym_xform *xform,
+				  struct rte_cryptodev_sym_session *session,
+				  struct rte_mempool *mp)
+{
+	struct rte_crypto_sym_xform *cipher_xform = NULL;
+	struct rte_crypto_sym_xform *auth_xform = NULL;
+	struct uadk_crypto_session *sess;
+	int ret;
+
+	ret = rte_mempool_get(mp, (void *)&sess);
+	if (ret != 0) {
+		UADK_LOG(ERR, "Failed to get session %p private data from mempool",
+			 sess);
+		return -ENOMEM;
+	}
+
+	sess->chain_order = uadk_get_chain_order(xform);
+	switch (sess->chain_order) {
+	case UADK_CHAIN_ONLY_CIPHER:
+		cipher_xform = xform;
+		break;
+	case UADK_CHAIN_ONLY_AUTH:
+		auth_xform = xform;
+		break;
+	case UADK_CHAIN_CIPHER_AUTH:
+		cipher_xform = xform;
+		auth_xform = xform->next;
+		break;
+	case UADK_CHAIN_AUTH_CIPHER:
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
+	default:
+		ret = -ENOTSUP;
+		goto err;
+	}
+
+	if (cipher_xform) {
+		ret = uadk_set_session_cipher_parameters(dev, sess, cipher_xform);
+		if (ret != 0) {
+			UADK_LOG(ERR,
+				"Invalid/unsupported cipher parameters");
+			goto err;
+		}
+	}
+
+	if (auth_xform) {
+		ret = uadk_set_session_auth_parameters(dev, sess, auth_xform);
+		if (ret != 0) {
+			UADK_LOG(ERR,
+				"Invalid/unsupported auth parameters");
+			goto err;
+		}
+	}
+
+	set_sym_session_private_data(session, dev->driver_id, sess);
+
+	return 0;
+err:
+	rte_mempool_put(mp, sess);
+	return ret;
+}
+
+static void
+uadk_crypto_sym_session_clear(struct rte_cryptodev *dev,
+			      struct rte_cryptodev_sym_session *sess)
+{
+	struct uadk_crypto_session *priv_sess =
+			get_sym_session_private_data(sess, dev->driver_id);
+
+	if (unlikely(priv_sess == NULL)) {
+		UADK_LOG(ERR, "Failed to get session %p private data.", priv_sess);
+		return;
+	}
+
+	if (priv_sess->handle_cipher) {
+		wd_cipher_free_sess(priv_sess->handle_cipher);
+		priv_sess->handle_cipher = 0;
+	}
+
+	if (priv_sess->handle_digest) {
+		wd_digest_free_sess(priv_sess->handle_digest);
+		priv_sess->handle_digest = 0;
+	}
+
+	set_sym_session_private_data(sess, dev->driver_id, NULL);
+	rte_mempool_put(rte_mempool_from_obj(priv_sess), priv_sess);
+}
+
+static struct rte_cryptodev_ops uadk_crypto_pmd_ops = {
+		.dev_configure		= uadk_crypto_pmd_config,
+		.dev_start		= uadk_crypto_pmd_start,
+		.dev_stop		= uadk_crypto_pmd_stop,
+		.dev_close		= uadk_crypto_pmd_close,
+		.stats_get		= uadk_crypto_pmd_stats_get,
+		.stats_reset		= uadk_crypto_pmd_stats_reset,
+		.dev_infos_get		= uadk_crypto_pmd_info_get,
+		.queue_pair_setup	= uadk_crypto_pmd_qp_setup,
+		.queue_pair_release	= uadk_crypto_pmd_qp_release,
+		.sym_session_get_size	= uadk_crypto_sym_session_get_size,
+		.sym_session_configure	= uadk_crypto_sym_session_configure,
+		.sym_session_clear	= uadk_crypto_sym_session_clear,
+};
+
+static void
+uadk_process_cipher_op(struct rte_crypto_op *op,
+		       struct uadk_crypto_session *sess,
+		       struct rte_mbuf *msrc, struct rte_mbuf *mdst)
+{
+	uint32_t off = op->sym->cipher.data.offset;
+	int ret;
+
+	if (!sess) {
+		op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+		return;
+	}
+
+	sess->cipher.req.src = rte_pktmbuf_mtod_offset(msrc, uint8_t *, off);
+	sess->cipher.req.in_bytes = op->sym->cipher.data.length;
+	sess->cipher.req.dst = rte_pktmbuf_mtod_offset(mdst, uint8_t *, off);
+	sess->cipher.req.out_buf_bytes = sess->cipher.req.in_bytes;
+	sess->cipher.req.iv_bytes = sess->iv.length;
+	sess->cipher.req.iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+							sess->iv.offset);
+	if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+		sess->cipher.req.op_type = WD_CIPHER_ENCRYPTION;
+	else
+		sess->cipher.req.op_type = WD_CIPHER_DECRYPTION;
+
+	do {
+		ret = wd_do_cipher_sync(sess->handle_cipher, &sess->cipher.req);
+	} while (ret == -WD_EBUSY);
+
+	if (sess->cipher.req.out_buf_bytes > sess->cipher.req.in_bytes)
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+	if (ret)
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
+static void
+uadk_process_auth_op(struct uadk_qp *qp, struct rte_crypto_op *op,
+		     struct uadk_crypto_session *sess,
+		     struct rte_mbuf *msrc, struct rte_mbuf *mdst)
+{
+	uint32_t srclen = op->sym->auth.data.length;
+	uint32_t off = op->sym->auth.data.offset;
+	uint8_t *dst = qp->temp_digest;
+	int ret;
+
+	if (!sess) {
+		op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+		return;
+	}
+
+	sess->auth.req.in = rte_pktmbuf_mtod_offset(msrc, uint8_t *, off);
+	sess->auth.req.in_bytes = srclen;
+	sess->auth.req.out = dst;
+
+	do {
+		ret = wd_do_digest_sync(sess->handle_digest, &sess->auth.req);
+	} while (ret == -WD_EBUSY);
+
+	if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
+		if (memcmp(dst, op->sym->auth.digest.data,
+				sess->auth.digest_length) != 0) {
+			op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		}
+	} else {
+		uint8_t *auth_dst;
+
+		auth_dst = op->sym->auth.digest.data;
+		if (auth_dst == NULL)
+			auth_dst = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
+					op->sym->auth.data.offset +
+					op->sym->auth.data.length);
+		memcpy(auth_dst, dst, sess->auth.digest_length);
+	}
+
+	if (ret)
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
+static uint16_t
+uadk_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+			  uint16_t nb_ops)
+{
+	struct uadk_qp *qp = queue_pair;
+	struct uadk_crypto_session *sess = NULL;
+	struct rte_mbuf *msrc, *mdst;
+	struct rte_crypto_op *op;
+	uint16_t enqd = 0;
+	int i, ret;
+
+	for (i = 0; i < nb_ops; i++) {
+		op = ops[i];
+		op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		msrc = op->sym->m_src;
+		mdst = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src;
+
+		sess = NULL;
+		if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION &&
+		    likely(op->sym->session != NULL))
+			sess = (struct uadk_crypto_session *)
+				get_sym_session_private_data(
+					op->sym->session,
+					uadk_cryptodev_driver_id);
+
+		if (unlikely(sess == NULL)) {
+			op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+			qp->qp_stats.enqueue_err_count++;
+			continue;
+		}
+
+		switch (sess->chain_order) {
+		case UADK_CHAIN_ONLY_CIPHER:
+			uadk_process_cipher_op(op, sess, msrc, mdst);
+			break;
+		case UADK_CHAIN_ONLY_AUTH:
+			uadk_process_auth_op(qp, op, sess, msrc, mdst);
+			break;
+		case UADK_CHAIN_CIPHER_AUTH:
+			uadk_process_cipher_op(op, sess, msrc, mdst);
+			uadk_process_auth_op(qp, op, sess, mdst, mdst);
+			break;
+		case UADK_CHAIN_AUTH_CIPHER:
+			uadk_process_auth_op(qp, op, sess, msrc, mdst);
+			uadk_process_cipher_op(op, sess, msrc, mdst);
+			break;
+		default:
+			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+			break;
+		}
+
+		if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
+			op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+		if (op->status != RTE_CRYPTO_OP_STATUS_ERROR) {
+			ret = rte_ring_enqueue(qp->processed_pkts, (void *)op);
+			if (ret < 0)
+				goto enqueue_err;
+			qp->qp_stats.enqueued_count++;
+			enqd++;
+		} else {
+			/* op processing failed; count it as an enqueue error */
+			qp->qp_stats.enqueue_err_count++;
+		}
+	}
+
+	return enqd;
+
+enqueue_err:
+	qp->qp_stats.enqueue_err_count++;
+	return enqd;
+}
+
+static uint16_t
+uadk_crypto_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+			  uint16_t nb_ops)
+{
+	struct uadk_qp *qp = queue_pair;
+	unsigned int nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)ops, nb_ops, NULL);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+static int
+uadk_cryptodev_probe(struct rte_vdev_device *vdev)
+{
+	struct rte_cryptodev_pmd_init_params init_params = {
+		.name = "",
+		.private_data_size = sizeof(struct uadk_crypto_priv),
+		.max_nb_queue_pairs =
+				RTE_CRYPTODEV_PMD_DEFAULT_MAX_NB_QUEUE_PAIRS,
+	};
+	struct rte_cryptodev *dev;
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	dev = rte_cryptodev_pmd_create(name, &vdev->device, &init_params);
+	if (dev == NULL) {
+		UADK_LOG(ERR, "driver %s: create failed", init_params.name);
+		return -ENODEV;
+	}
+
+	dev->dev_ops = &uadk_crypto_pmd_ops;
+	dev->driver_id = uadk_cryptodev_driver_id;
+	dev->dequeue_burst = uadk_crypto_dequeue_burst;
+	dev->enqueue_burst = uadk_crypto_enqueue_burst;
+	dev->feature_flags = RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			     RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			     RTE_CRYPTODEV_FF_SYM_SESSIONLESS;
+
+	rte_cryptodev_pmd_probing_finish(dev);
+
+	return 0;
+}
+
+static int
+uadk_cryptodev_remove(struct rte_vdev_device *vdev)
+{
+	struct rte_cryptodev *cryptodev;
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	return rte_cryptodev_pmd_destroy(cryptodev);
+}
+
+static struct rte_vdev_driver uadk_crypto_pmd = {
+	.probe       = uadk_cryptodev_probe,
+	.remove      = uadk_cryptodev_remove,
+};
+
+static struct cryptodev_driver uadk_crypto_drv;
+
+#define UADK_CRYPTO_DRIVER_NAME crypto_uadk
+RTE_PMD_REGISTER_VDEV(UADK_CRYPTO_DRIVER_NAME, uadk_crypto_pmd);
+RTE_PMD_REGISTER_CRYPTO_DRIVER(uadk_crypto_drv, uadk_crypto_pmd.driver,
+			       uadk_cryptodev_driver_id);
diff --git a/drivers/crypto/uadk/version.map b/drivers/crypto/uadk/version.map
new file mode 100644
index 0000000000..c2e0723b4c
--- /dev/null
+++ b/drivers/crypto/uadk/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+	local: *;
+};
-- 
2.36.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: [EXT] [PATCH 0/3] Add uadk compression and crypto PMD
  2022-06-20 12:35 [PATCH 0/3] Add uadk compression and crypto PMD Zhangfei Gao
                   ` (2 preceding siblings ...)
  2022-06-20 12:35 ` [PATCH 3/3] crypto/uadk: add uadk crypto PMD Zhangfei Gao
@ 2022-08-28 13:02 ` Akhil Goyal
  2022-08-29  6:52   ` Zhangfei Gao
  3 siblings, 1 reply; 8+ messages in thread
From: Akhil Goyal @ 2022-08-28 13:02 UTC (permalink / raw)
  To: Zhangfei Gao, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella; +Cc: dev

> UADK compression PMD provides poll mode compression & decompression
> driver
> UADK crypto PMD provides poll mode driver
> All cryptography operations are using UADK crypto API.
> All compression operations are using UADK compress API.
> 
> Hardware accelerators using UADK are supposed to be supported.
> Currently supported hardware platforms:
> HiSilicon Kunpeng920 and Kunpeng930
> 
> Test:
> sudo dpdk-test --vdev=compress_uadk
> sudo dpdk-test --vdev=crypto_uadk
> 
> v1:
> Target to DPDK 22.11
> Rebased on http://git.dpdk.org/next/dpdk-next-crypto/
> 
> Suggested from Akhil Goyal <gakhil@marvell.com>
> > Current release cycle is DPDK-22.07 for which this patchset is late.
> > As we had the V1 deadline last month.
> > This patchset can go for next release cycle which is 22.11.
> 
> Zhangfei Gao (3):
>   compress/uadk: add uadk compression PMD
>   test/crypto: add cryptodev_uadk_autotest
>   crypto/uadk: add uadk crypto PMD
> 
>  app/test/test_cryptodev.c                 |    7 +
>  app/test/test_cryptodev.h                 |    1 +
>  doc/guides/compressdevs/index.rst         |    1 +
>  doc/guides/compressdevs/uadk.rst          |   60 ++
>  doc/guides/cryptodevs/index.rst           |    1 +
>  doc/guides/cryptodevs/uadk.rst            |   70 ++
>  drivers/compress/meson.build              |    1 +
>  drivers/compress/uadk/meson.build         |   28 +
>  drivers/compress/uadk/uadk_compress_pmd.c |  489 +++++++++
>  drivers/compress/uadk/version.map         |    3 +
>  drivers/crypto/meson.build                |    1 +
>  drivers/crypto/uadk/meson.build           |   28 +
>  drivers/crypto/uadk/uadk_crypto_pmd.c     | 1137 +++++++++++++++++++++
>  drivers/crypto/uadk/version.map           |    3 +
>  14 files changed, 1830 insertions(+)
>  create mode 100644 doc/guides/compressdevs/uadk.rst
>  create mode 100644 doc/guides/cryptodevs/uadk.rst
>  create mode 100644 drivers/compress/uadk/meson.build
>  create mode 100644 drivers/compress/uadk/uadk_compress_pmd.c
>  create mode 100644 drivers/compress/uadk/version.map
>  create mode 100644 drivers/crypto/uadk/meson.build
>  create mode 100644 drivers/crypto/uadk/uadk_crypto_pmd.c
>  create mode 100644 drivers/crypto/uadk/version.map
> 
Please split the series into two: the crypto PMD and the compression PMD.
And split each PMD into small, logical (individually compilable) patches.

Update MAINTAINERS file
Update documentation in doc/guides/cryptodevs/features/uadk.ini
and doc/guides/compressdevs/features/uadk.ini

Also, UADK does not look like a PMD name; it is a development kit
outside of DPDK. Can you rename it to something else?

Are there any dependencies on external libraries to build it?
Can you explain what exactly UADK is?

Regards,
Akhil

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [EXT] [PATCH 0/3] Add uadk compression and crypto PMD
  2022-08-28 13:02 ` [EXT] [PATCH 0/3] Add uadk compression and " Akhil Goyal
@ 2022-08-29  6:52   ` Zhangfei Gao
  2022-08-29  7:11     ` Akhil Goyal
  0 siblings, 1 reply; 8+ messages in thread
From: Zhangfei Gao @ 2022-08-29  6:52 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, acc


Hi, Akhil

On 2022/8/28 下午9:02, Akhil Goyal wrote:
>> UADK compression PMD provides poll mode compression & decompression
>> driver
>> UADK crypto PMD provides poll mode driver
>> All cryptography operations are using UADK crypto API.
>> All compression operations are using UADK compress API.
>>
>> Hardware accelerators using UADK are supposed to be supported.
>> Currently supported hardware platforms:
>> HiSilicon Kunpeng920 and Kunpeng930
>>
>> Test:
>> sudo dpdk-test --vdev=compress_uadk
>> sudo dpdk-test --vdev=crypto_uadk
>>
>> v1:
>> Target to DPDK 22.11
>> Rebased on http://git.dpdk.org/next/dpdk-next-crypto/
>>
>> Suggested from Akhil Goyal <gakhil@marvell.com>
>>> Current release cycle is DPDK-22.07 for which this patchset is late.
>>> As we had the V1 deadline last month.
>>> This patchset can go for next release cycle which is 22.11.
>> Zhangfei Gao (3):
>>    compress/uadk: add uadk compression PMD
>>    test/crypto: add cryptodev_uadk_autotest
>>    crypto/uadk: add uadk crypto PMD
>>
>>   app/test/test_cryptodev.c                 |    7 +
>>   app/test/test_cryptodev.h                 |    1 +
>>   doc/guides/compressdevs/index.rst         |    1 +
>>   doc/guides/compressdevs/uadk.rst          |   60 ++
>>   doc/guides/cryptodevs/index.rst           |    1 +
>>   doc/guides/cryptodevs/uadk.rst            |   70 ++
>>   drivers/compress/meson.build              |    1 +
>>   drivers/compress/uadk/meson.build         |   28 +
>>   drivers/compress/uadk/uadk_compress_pmd.c |  489 +++++++++
>>   drivers/compress/uadk/version.map         |    3 +
>>   drivers/crypto/meson.build                |    1 +
>>   drivers/crypto/uadk/meson.build           |   28 +
>>   drivers/crypto/uadk/uadk_crypto_pmd.c     | 1137 +++++++++++++++++++++
>>   drivers/crypto/uadk/version.map           |    3 +
>>   14 files changed, 1830 insertions(+)
>>   create mode 100644 doc/guides/compressdevs/uadk.rst
>>   create mode 100644 doc/guides/cryptodevs/uadk.rst
>>   create mode 100644 drivers/compress/uadk/meson.build
>>   create mode 100644 drivers/compress/uadk/uadk_compress_pmd.c
>>   create mode 100644 drivers/compress/uadk/version.map
>>   create mode 100644 drivers/crypto/uadk/meson.build
>>   create mode 100644 drivers/crypto/uadk/uadk_crypto_pmd.c
>>   create mode 100644 drivers/crypto/uadk/version.map
>>
> Please split the series into two - crypto pmd and compression pmd.
> And split each of the PMD into logical small (individually compiled) patches.
>
> Update MAINTAINERS file
> Update documentation in doc/guides/cryptodevs/features/uadk.ini
> and doc/guides/compressdevs/features/uadk.ini
Thanks for the suggestion.

>
> Also UADK does not look to be a PMD name. It is some development kit
> Outside of DPDK. Can you rename it to something else?
>
> Is there some dependency to build it using external libraries etc?
> Can you explain what exactly is UADK?
UADK is a framework for user application to access hardware accelerator .
https://github.com/Linaro/uadk/blob/master/docs/wd_design.md

UADK relies on SVA (Shared Virtual Addressing), which must be supported
by the IOMMU. As a result, user applications can use virtual addresses
directly for DMA: the IOMMU and MMU share the same virtual address space
by copying the same page table, which improves performance as well as
usability.

UADK provides algorithm libraries and APIs for applications to use.
The library discovers the actual hardware available on the platform.
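
For readers unfamiliar with the UADK API, the call sequence the crypto PMD
in patch 3/3 uses can be sketched as below. This is an illustrative,
non-runnable sketch reconstructed from the calls visible in the patch; the
key, IV, and buffer variables are hypothetical application state, and it
assumes the UADK (libwd) headers and SVA-capable hardware such as
Kunpeng920/930.

```c
/* Sketch of the UADK synchronous cipher flow used by the PMD.
 * key/src/dst/iv are hypothetical application buffers; plain virtual
 * addresses are usable for DMA directly because of SVA.
 */
struct wd_cipher_sess_setup setup = {0};
struct sched_params params = {0};
struct wd_cipher_req req = {0};
handle_t h_sess;
int ret;

ret = wd_cipher_env_init(NULL);     /* one-time accelerator env setup */
if (ret < 0)
	return ret;

setup.alg = WD_CIPHER_AES;
setup.mode = WD_CIPHER_CBC;
params.numa_id = -1;                /* let UADK choose a nearby NUMA node */
setup.sched_param = &params;

h_sess = wd_cipher_alloc_sess(&setup);
wd_cipher_set_key(h_sess, key, key_len);

req.src = src;                      /* plain virtual addresses */
req.dst = dst;
req.in_bytes = len;
req.out_buf_bytes = len;
req.iv = iv;
req.iv_bytes = iv_len;
req.op_type = WD_CIPHER_ENCRYPTION;

do {                                /* retry while the queue is busy */
	ret = wd_do_cipher_sync(h_sess, &req);
} while (ret == -WD_EBUSY);

wd_cipher_free_sess(h_sess);
wd_cipher_env_uninit();
```

The digest path in the PMD is symmetric: wd_digest_env_init(),
wd_digest_alloc_sess(), wd_digest_set_key() (HMAC only), then
wd_do_digest_sync() in the same busy-retry loop.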

We also provide an OpenSSL engine for UADK:
https://github.com/Linaro/uadk_engine
For consistency, we plan to provide a UADK DPDK PMD as well, named UADK.

What do you think.


Thanks.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: [EXT] [PATCH 0/3] Add uadk compression and crypto PMD
  2022-08-29  6:52   ` Zhangfei Gao
@ 2022-08-29  7:11     ` Akhil Goyal
  2022-08-29  8:21       ` Zhangfei Gao
  0 siblings, 1 reply; 8+ messages in thread
From: Akhil Goyal @ 2022-08-29  7:11 UTC (permalink / raw)
  To: Zhangfei Gao, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, acc, thomas, David Marchand

> >
> > Also UADK does not look to be a PMD name. It is some development kit
> > Outside of DPDK. Can you rename it to something else?
> >
> > Is there some dependency to build it using external libraries etc?
> > Can you explain what exactly is UADK?
> UADK is a framework for user application to access hardware accelerator .
> https://github.com/Linaro/uadk/blob/master/docs/wd_design.md
> 
> UADK relies on SVA (Shared Virtual Address) that needs to be supported
> by IOMMU.
> As a result, user application can directly use virtual address for dma,
> since iommu and
> mmu share the same virtual address by coping the same page table, which
> enhance the
> performance as well as easy usability.
> 
> UADK provide algorithm libraries and api for application to use.
> The library will find the real hardware in the platform.
> 
> We also provide openssl engine for uadk,
> https://urldefense.proofpoint.com/v2/url?u=https-
> 3A__github.com_Linaro_uadk-
> 5Fengine&d=DwIDaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=DnL7Si2wl_PRwpZ9TW
> ey3eu68gBzn7DkPwuqhd6WNyo&m=5ceaLjLGdoHAuVeeh-
> 9uypoBDKCK43QrqhAOEbBu1vFFrSFxNpncZzByqSguUBUk&s=s4G4UzM5B3w8t7
> b0IMKgAbWS5DN7n6ez4WkZpIZ1QGs&e=
> For alignment, we planned to provide uadk dpdk pmd as well, with the
> name as UADK.

Thanks for the explanation. Please add this information to the documentation as well.
Hardware PMDs are generally named after the hardware device, not after some other library.

Naming it uadk makes it look like a software PMD.


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [EXT] [PATCH 0/3] Add uadk compression and crypto PMD
  2022-08-29  7:11     ` Akhil Goyal
@ 2022-08-29  8:21       ` Zhangfei Gao
  0 siblings, 0 replies; 8+ messages in thread
From: Zhangfei Gao @ 2022-08-29  8:21 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, acc, thomas, David Marchand



On 2022/8/29 at 3:11 PM, Akhil Goyal wrote:
>>> Also UADK does not look to be a PMD name. It is some development kit
>>> Outside of DPDK. Can you rename it to something else?
>>>
>>> Is there some dependency to build it using external libraries etc?
>>> Can you explain what exactly is UADK?
>> UADK is a framework for user applications to access hardware accelerators.
>> https://github.com/Linaro/uadk/blob/master/docs/wd_design.md
>>
>> UADK relies on SVA (Shared Virtual Address), which must be supported
>> by the IOMMU.
>> As a result, a user application can use virtual addresses directly for DMA,
>> since the IOMMU and MMU share the same virtual address space by sharing
>> the same page table, which improves both performance and ease of use.
>>
>> UADK provides algorithm libraries and APIs for applications to use.
>> The library discovers the real hardware available on the platform.
>>
>> We also provide an OpenSSL engine for UADK:
>> https://github.com/Linaro/uadk_engine
>> For alignment, we plan to provide a UADK DPDK PMD as well, under the
>> name UADK.
> Thanks for the explanation. Please add the information in documentation as well.
> Hardware PMDs are generally named after the hardware device and not on some other library.
>
> Naming it with uadk would look like a software PMD.
Well, UADK is the brand HiSilicon wants to advertise, just like Intel's QAT.
And now, we are doing our best to build the UADK ecosystem.
So if possible, we would still like to keep the name, as part of the UADK
ecosystem :)

Thanks


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2022-08-29  8:21 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-20 12:35 [PATCH 0/3] Add uadk compression and crypto PMD Zhangfei Gao
2022-06-20 12:35 ` [PATCH 1/3] compress/uadk: add uadk compression PMD Zhangfei Gao
2022-06-20 12:35 ` [PATCH 2/3] test/crypto: add cryptodev_uadk_autotest Zhangfei Gao
2022-06-20 12:35 ` [PATCH 3/3] crypto/uadk: add uadk crypto PMD Zhangfei Gao
2022-08-28 13:02 ` [EXT] [PATCH 0/3] Add uadk compression and " Akhil Goyal
2022-08-29  6:52   ` Zhangfei Gao
2022-08-29  7:11     ` Akhil Goyal
2022-08-29  8:21       ` Zhangfei Gao
